As a first step, our system converts inflection tables into paradigms using a procedure given in Hulden (2014). The system generalizes concrete inflection tables by associating the common symbol subsequences shared by the words (the LCS) with variables. These variables represent abstractions that, at table reconstruction time, can correspond to any sequence of one or more symbols. Because many inflection tables of different words are identical after the common parts are assigned to variables, this procedure yields a comparatively small number of paradigms from a large number of input inflection tables. The process is illustrated in Figure 1. During generalization, the forms that gave rise to a particular paradigm are stored and later used for training a classifier to assign unknown base forms to paradigms.

Having obtained a number of paradigms through this generalization method, reconstructing an inflection table for an unseen base form in effect means picking the correct paradigm from among the ones generalized, a standard classification task. After seeing a number of inflection tables generalized into abstract paradigms as described above, the task we evaluate is how well complete inflection tables can be reconstructed from only seeing an unknown base form. To this end, we train a "one-vs-the-rest" linear multi-class support vector machine (SVM). For each example base form w_b^i that is a member of paradigm p_j, we extract all substrings of w_b^i anchored at the left and right edges, and use those as binary features corresponding to the paradigm p_j. For example, during training, the German verb lesen would have the following binary features activated: {#l, #le, #les, #lese, #lesen, #lesen#, lesen#, esen#, sen#, en#, n#}.

Before applying the classifier to an unseen base form and reconstructing the corresponding inflection table, many competing paradigms can be ruled out as ill-matched simply by inspecting the base form. For example, the infinitive for the paradigm containing the English verb sing is generalized as x1+i+x2. At classification time of a verb like run, this paradigm can be ruled out due to incompatibility: there is no i in run, and so the infinitive cannot be generated. Likewise, the Icelandic paradigm seen in Figure 1 can be ruled out for the base form hest 'horse', as the base form does not contain ö. The SVM classifier may indeed suggest such paradigm assignments, but such classifications are ignored and the highest-scoring compatible paradigm is selected instead. These additional constraints on possible base form-paradigm pairings are a general feature of the LCS strategy and are not tied to the classification method used here.

In order to eliminate noise features, we performed feature selection using the development set. We simultaneously tuned the SVM soft-margin penalty parameter C, as well as the length and type (prefix/suffix) of substrings to include as features. More concretely, we explored the values using a grid search over C = 0.01...5.0 with a growing sequence gap (Hsu et al., 2003), the maximum length of anchored substring features to use (3...9), and whether to include prefix-anchored substrings at all (0/1). In the second experiment, where cross-validation was used, we performed the same tuning procedure on each fold's development set.
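A minimal sketch of the anchored-substring features and the compatibility filter described above, assuming scikit-learn (LinearSVC is one-vs-rest by default). The Paradigm object with a matches() method, the feature names, and the fallback behaviour are illustrative assumptions, not the authors' implementation.

from sklearn.svm import LinearSVC
from sklearn.feature_extraction import DictVectorizer

def anchored_substrings(base_form, max_len=9, use_prefixes=True):
    """Binary features: edge-anchored substrings of the base form."""
    w = "#" + base_form + "#"            # mark the word edges
    feats = {}
    for i in range(1, min(max_len, len(w)) + 1):
        feats["suf=" + w[-i:]] = 1       # right-anchored substrings
        if use_prefixes:
            feats["pre=" + w[:i]] = 1    # left-anchored substrings
    return feats

def train_classifier(base_forms, paradigm_ids, C=1.0):
    vec = DictVectorizer()
    X = vec.fit_transform(anchored_substrings(w) for w in base_forms)
    clf = LinearSVC(C=C).fit(X, paradigm_ids)     # one-vs-rest linear SVM
    return vec, clf

def pick_paradigm(base_form, vec, clf, paradigms):
    """Highest-scoring paradigm whose variable pattern matches the base form."""
    scores = clf.decision_function(vec.transform([anchored_substrings(base_form)]))[0]
    ranked = sorted(zip(scores, clf.classes_), reverse=True)
    for _, pid in ranked:
        if paradigms[pid].matches(base_form):     # e.g. x1+i+x2 must match "sing"
            return pid
    return ranked[0][1]                           # fall back to the top prediction

In practice, the grid search over C, the maximum substring length and the prefix on/off switch described above would simply be wrapped around train_classifier using the development set.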
In this paper, we focus on sequence generation tasks, such as question generation and summarization, that have a one-to-many relationship. More formally, given a source sequence x = (x_1, ..., x_S) ∈ X, our goal is to model a conditional multimodal distribution for the target sequence p(y|x) that assigns high values to p(y^1|x), ..., p(y^K|x) for K valid mappings x → y^1, ..., x → y^K. For instance, Fig. 1 illustrates K = 3 different valid questions generated for a given passage.

For learning a multimodal distribution, it is not appropriate to use a generic encoder-decoder (Cho et al., 2014) that minimizes the expected negative log-probability over all of the valid mappings. This can lead to a suboptimal mapping that lies in the middle of the targets but is not near any of them. As a solution, we propose to (1) introduce a latent variable called focus that factorizes the distribution into two stages, select and generate (Section 3.1), and (2) independently train the factorized distributions (Section 3.2).

In order to factorize the multimodal distribution into the two stages (select and generate), we introduce a latent variable called focus. The intuition is that in the select stage we sample several meaningful focus, each of which indicates which part of the source sequence should be considered important. Then, in the generate stage, each sampled focus biases the generation process towards being conditioned on the focused content. Formally, we model focus with a sequence of binary variables, each of which corresponds to a token in the input sequence, i.e., m = (m_1, ..., m_S) ∈ {0, 1}^S. The intuition is that m_t = 1 indicates that the t-th source token x_t should be focused during sequence generation. For instance, in Fig. 1, colored tokens (green, red, or blue) show that different tokens are focused (i.e., their values are 1) for different focus samples (out of 3). We first use the latent variable m to factorize p(y|x),

p(y|x) = Σ_m p_φ(m|x) p_θ(y|x, m),    (1)

where p_φ(m|x) is the selector and p_θ(y|x, m) is the generator. The factorization separates focus selection from generation, so that modeling multimodality (diverse outputs) can be handled solely in the select stage and the generate stage can concentrate solely on the generation task itself. We now describe each component in more detail.

Selector: In order to directly control the diversity of the SELECTOR's outputs, we model it as a hard mixture of experts (hard-MoE) (Jacobs et al., 1991; Eigen et al., 2014), where each expert specializes in focusing on different parts of the source sequences. In Figs. 1 and 2, the focus produced by each SELECTOR expert is colored differently. We introduce a multinomial latent variable z ∈ Z, where Z = {1, ..., K}, and let each focus m be assigned to one of K experts with uniform prior p(z|x) = 1/K. With this mixture setting, p(m|x) is recovered as follows:

p(m|x) = Σ_{z=1}^{K} p(z|x) p(m|x, z) = (1/K) Σ_{z=1}^{K} p(m|x, z).    (2)

We model the SELECTOR with a single-layer Bidirectional Gated Recurrent Unit (Bi-GRU) (Cho et al., 2014) followed by two fully-connected layers and a Sigmoid activation. We feed the current hidden state h_t, the first and last hidden states h_1 and h_S, and the expert embedding e_z to the fully-connected layers (FC). The expert embedding e_z is unique to each expert and is trained from a random initialization. From our initial experiments, we found this parallel focus inference to be more effective than an auto-regressive pointing mechanism (Vinyals et al., 2015; Subramanian et al., 2018).
The distribution of the focus conditioned on the input x and the expert id z is the Bernoulli distribution of the resulting values,

p_φ(m_t | x, z) = Bernoulli(o_t^z),  where  o_t^z = Sigmoid(FC([h_t; h_1; h_S; e_z])).    (3)

To prevent low-quality experts from dying during training, we let the experts share all parameters except for the individual expert embedding e_z. We also reuse the word embedding of the generator as the word embedding of the SELECTOR, to promote cooperative knowledge sharing between the SELECTOR and the generator. With these parameter sharing techniques, adding a mixture of SELECTOR experts adds only a small number of parameters (GRU, FC, and e_z) to sequence-to-sequence models.

Generator: For maximum diversity, we sample one focus from each SELECTOR expert to approximate p_φ(m|x). For deterministic behavior, we threshold o_t^z with a hyperparameter th instead of sampling from the Bernoulli distribution. This gives us a list of focus m^1, ..., m^K coming from the K experts. Each focus m^z = (m^z_1, ..., m^z_S) is encoded as embeddings and concatenated with the input embeddings of the source sequence x = (x_1, ..., x_S). An off-the-shelf generation function such as an encoder-decoder can be used for modeling p(y|m, x), as long as it accepts a stream of input embeddings. We use an identical generation function with K different focus samples to produce K different, diverse outputs.

Algorithm 1: Independent training of the SELECTOR (hard-EM) and the generator (MLE).
Data: D = {(x^(i), y^(i), m_guide^(i))}_{i=1}^{N}
1  for i ∈ {1, ..., N} do
       /* Selector p_φ(m|x, z) E-step */
2      for z ∈ {1, ..., K} do
3          L^(i)_{z,select} = − log p_φ(m_guide^(i) | x^(i), z)
4      end
5      z_best^(i) = argmin_z L^(i)_{z,select}
       /* Selector p_φ(m|x, z) M-step */
6      φ_{z_best^(i)} = φ_{z_best^(i)} − α ∇_{φ_{z_best^(i)}} L^(i)_{z_best^(i),select}
       /* Generator p_θ(y|x, m) update */
7      L^(i)_gen = − log p_θ(y^(i) | x^(i), m_guide^(i))
8      θ = θ − α ∇_θ L^(i)_gen
9  end

3.2 Training

Marginalizing the Bernoulli distribution in Eq. 3 by enumerating all possible focus is intractable, since the cardinality of the focus space, 2^S, grows exponentially with the source sequence length S. Policy gradient (Williams, 1992; Yu et al., 2017) or Gumbel-softmax (Jang et al., 2017; Maddison et al., 2017) are often used to propagate gradients through a stochastic process, but we empirically found that these do not work well. We instead create a focus guide and use it to independently and directly train the SELECTOR and the generator. Formally, a focus guide m_guide = (m_guide,1, ..., m_guide,S) is a simple proxy for whether a source token is focused during generation. We set the t-th focus guide m_guide,t to 1 if the t-th source token x_t is focused in the target sequence y, and to 0 otherwise. During training, m_guide acts as a target for the SELECTOR and is given as input to the generator (teacher forcing). During inference, m is sampled from the SELECTOR and fed to the generator.

In question generation, we set m_guide,t to 1 if there is a target question token that shares the same word stem with passage token x_t. We then set m_guide,t to 0 if x_t is a stop word or is inside the answer phrase. In abstractive summarization, we generate the focus guide using copy target generation by Gehrmann et al. (2018), where a source document token x_t is marked as copied if it is part of the longest possible subsequence that overlaps with the target summary.

Alg. 1 describes the overall training process, which first uses stochastic hard-EM (Neal and Hinton, 1998; Lee et al., 2016) for training the SELECTOR and then canonical MLE for training the generator. E-step (lines 2-5 in Alg. 1): we sample focus from all experts and compute their losses − log p_φ(m_guide | x, z).
Then we choose the expert z_best with minimum loss:

z_best = argmin_z − log p_φ(m_guide | x, z).    (4)

M-step (line 6 in Alg. 1): we only update the parameters of the chosen expert z_best.

Training the generator (lines 7-8 in Alg. 1): The generator is independently trained using conventional teacher forcing, which minimizes − log p_θ(y | x, m_guide).
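A minimal PyTorch-style sketch of one step of Algorithm 1 (hard-EM for the SELECTOR, MLE for the generator). The selector and generator call signatures, tensor shapes and optimizers are hypothetical placeholders, not the authors' code.

import torch
import torch.nn.functional as F

def train_step(selector, generator, sel_opt, gen_opt, x, y, m_guide, K):
    # E-step: per-expert negative log-likelihood of the focus guide
    losses = []
    for z in range(K):
        logits = selector(x, expert=z)                      # (S,) focus logits
        losses.append(F.binary_cross_entropy_with_logits(
            logits, m_guide.float(), reduction="sum"))
    z_best = int(torch.argmin(torch.stack(losses)))

    # M-step: backpropagate only the loss of the chosen expert
    sel_opt.zero_grad()
    losses[z_best].backward()
    sel_opt.step()

    # Generator update: teacher forcing with the focus guide as input
    gen_opt.zero_grad()
    logits = generator(x, focus=m_guide)                    # (T, vocab) token logits
    gen_loss = F.cross_entropy(logits, y)
    gen_loss.backward()
    gen_opt.step()
    return z_best, gen_loss.item()

Because the experts share all parameters except the expert embedding, backpropagating only the chosen expert's loss updates the shared GRU/FC weights and that expert's embedding, mirroring line 6 of Algorithm 1.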
The survey contains a total of 29 questions (see Appendix A), of which 16 are open questions with free-text answers. The remaining ones are a mixture of multiple-choice and yes/no questions. The findings of this survey served as an important contribution to the final Strategic Research and Innovation Agenda on Language Technologies for Multilingual Europe, which was presented and discussed at the META-FORUM 2017 conference on November 13/14 and published in its final version in December 2017. The survey is divided into three main parts covering (1) background, research interests and projects of the participants, (2) visions for a large-scale European Language Technology research and development programme, and (3) ideas on talent generation and retention in Europe. This division allowed us to capture an overview of current and ongoing research activities and developments in the field in the first part, reaching early-stage as well as more senior community members. The second part was intended to gather more expert knowledge with regard to visions and concrete plans for future work, in particular the steps and prerequisites needed for initializing a large-scale Human Language Technology Project tailored especially to Europe's demands and current opportunities. The third and final part addresses the current challenge of the brain drain that the European LT (and also AI) community is experiencing. Participants were not obliged to answer all questions, but were encouraged to fill in the ones they felt comfortable with. The survey was designed and set up using the service Typeform, a software service for building online forms (see Figure 1).
Our multimodal parallel corpus creation consists of three main stages: (1) movie sentence segmentation, (2) prosodic parameter extraction, and (3) parallel sentence alignment. The first and second stages can be seen as monolingual data creation, as they take the audio and subtitle pairs as input in one language and output speech/text/prosodic parameters at the sentence level. The resulting monolingual data from stages 1 and 2 are fed into stage 3, where corresponding sentences are aligned and reordered to create the corresponding parallel data. A general overview of the system is presented in Figure 1. Let us discuss each of these stages in turn.

The first stage involves the extraction of audio and complete sentences from the original audio and the corresponding subtitles of the movie. For subtitles, the SubRip text file format (SRT) is accepted. Each subtitle entry contains the following information: (i) start time, (ii) end time, and (iii) the text of the speech spoken at that time in the movie. The subtitle entries do not necessarily correspond to sentences: a subtitle entry may include more than one sentence, and a sentence can spread over many subtitle entries.

The sentence segmentation stage starts with a preprocessing step in which elements that do not correspond to speech are removed. These include speaker name markers (e.g., JAMES: ...), text formatting tags, non-verbal information (laughter, horn, etc.) and speech dashes. Audio is initially segmented according to the timestamps in the subtitle entries, with an extra 0.5 seconds at each end. Then, each audio segment and its respective subtitle text are sent to the speech aligner software (Vocapia Scribe) to detect word boundaries. This pre-segmentation helps to detect the times of the words that end with a sentence-ending punctuation mark ('.', '?', '!', ':', '...'). The average word boundary confidence score of the word alignment is used to determine whether the sentence will be extracted successfully or not. If the confidence score is above a threshold of 0.5, the initial segment is cut at the occurrences of sentence endings. In a second pass, cut segments that do not end with a sentence-ending punctuation mark are merged with the subsequent segments to form full sentences. We used the Libav library to perform the audio cuts.

The second stage involves prosodic parameter extraction for each sentence segment detected in stage 1. The ProsodyPro library (Xu, 2013), a script developed for the Praat software (Boersma and Weenink, 2001), is used to extract prosodic features from speech. As input, ProsodyPro takes the audio of an utterance and a TextGrid file containing word boundaries, and outputs a set of objective measurements suitable for statistical analysis. We run ProsodyPro for each audio and TextGrid pair of sentences to generate the prosodic analysis files. See Table 2 for the list of analyses performed by ProsodyPro (information taken from the ProsodyPro webpage).

Table 1: Some available parallel speech corpora.
Corpus | Languages | Speech style
EPIC | English, Italian, Spanish | spontaneous/interpreted
MSLT | English, French, German | constrained conversations
EMIME | Finnish/English, German/English | prompted
EMIME Mandarin | Mandarin/English | prompted
MDA (Almeman et al., 2013) | Four Arabic dialects | prompted
Farsi-English (Melvin et al., 2004) | Farsi/English | read/semi-spontaneous
The TextGrid file with word boundaries is produced by sending the sentence audio and transcript to the word-aligner software and then converting the alignment information in XML into TextGrid format. Having word boundaries makes it possible to align continuous prosodic parameters (such as the pitch contour) with the words in the sentence.

The third stage involves the creation of the parallel data from the two monolingual datasets obtained from different audio and subtitle pairs of the same movie. The goal is to find the corresponding sentence s_2 in language 2, given a sentence s_1 in language 1. For each s_1 with timestamps (start_{s_1}, end_{s_1}), s_2 is searched for within a sliding window, among sentences that start in the time interval [start_{s_1} − 5, start_{s_1} + 5]. Among the candidate sentences within the range, the one most similar to s_1 is found by first translating s_1 into language 2 and then choosing the {s_1, s_2} pair that gives the best translation similarity measure above a certain threshold. For translation, the Yandex Translate API is used, and for the similarity measure, the Meteor library (Denkowski and Lavie, 2014).
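A schematic sketch of stage 3 (parallel sentence alignment). The translate and similarity helpers stand in for the translation API and the Meteor wrapper, and the similarity threshold value is an illustrative default rather than the one used in the paper.

def align_sentences(sents_l1, sents_l2, translate, similarity,
                    window=5.0, threshold=0.5):
    """sents_l1 / sents_l2: lists of (start_time, end_time, text) tuples."""
    pairs = []
    for start1, end1, text1 in sents_l1:
        # candidate sentences in language 2 starting within +/- `window` seconds
        candidates = [(s2, e2, t2) for (s2, e2, t2) in sents_l2
                      if start1 - window <= s2 <= start1 + window]
        if not candidates:
            continue
        translated = translate(text1)                 # language 1 -> language 2
        best = max(candidates, key=lambda c: similarity(translated, c[2]))
        if similarity(translated, best[2]) >= threshold:   # assumed threshold
            pairs.append(((start1, end1, text1), best))
    return pairs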
For the extraction of a local grammar from a corpus of special-language texts it is important to focus on the keywords. The patterns in which the keywords are embedded are assumed to comprise the principal elements of a subject-specific local grammar. The manner in which we derive the local grammar is shown in the algorithm below (Figure 1).

ALGORITHM: DISCOVER LOCAL GRAMMAR

1. SELECT a special language corpus (SL, comprising N_special words and vocabulary V_special).
   i.   USE a frequency list of single words from a corpus of texts used in day-to-day communications (SG, comprising N_general words and vocabulary V_general), for example the British National Corpus for the English language: F_general := {f(w1), f(w2), f(w3), ..., f(w_{V_general})}.
   ii.  CREATE a frequency-ordered list of the words in the SL texts: F_special := {f(w1), f(w2), f(w3), ...}.
   iii. COMPUTE the differences in the distribution of the same words in the two corpora SG and SL: weirdness(wi) = (f(wi)_special / f(wi)_general) * (N_general / N_special).
   iv.  CALCULATE the z-score for F_special: z_f(wi) = (f(wi) − f_av_special) / σ_special.
2. CREATE KEY, a set of N_key keywords ordered according to the magnitude of the two z-scores, KEY := {key1, key2, key3, ..., key_{N_key}}, such that z(f_keyi) and z(weirdness_keyi) > 1.
   i.   EXTRACT collocates of each key in SL over a window of an M-word neighbourhood.
   ii.  COMPUTE the strength of collocation using three measures due to Smadja (1994): U-score, k, and z-score.
   iii. EXTRACT sentences in the corpus that comprise highly collocating keywords ((U, k0, k1) >= (10, 1, 1)).
   iv.  FORM corpus SL':
        a. for each Sentence_i in SL':
        b. COMPUTE the frequency of every word in Sentence_i;
        c. REPLACE words with frequency less than a threshold value (f_threshold) by a place marker #;
        d. FOR more than one contiguous place marker, use *.
3. GENERATE trigrams in SL'; note the frequency of each trigram together with its position in the sentences:
   i.   FIND all the longest possible contiguous trigrams across all sentences in SL' and note their frequency.
   ii.  ORDER the (contiguous) trigrams according to frequency of occurrence.
   iii. (CONTIGUOUS) TRIGRAMS with frequency above a threshold form THE LOCAL GRAMMAR.

Figure 1. Algorithm for the acquisition of local-grammar patterns.

Briefly, given a specialist corpus (S_L), keywords are identified, and collocates of the keywords are extracted. Sentences containing key collocates are then used to construct a sub-corpus (S_L'). The sub-corpus S_L' is then analyzed, and trigrams above a frequency threshold in the sub-corpus are extracted; the position of the trigrams in each of the sentences is also noted. The sub-corpus is searched again for contiguous trigrams across the sentences: the sentences are analyzed for the existence of the trigrams in the correct position. If a trigram that, for example, is noted for its frequency at a sentence-initial position is found to co-occur with another frequent trigram at the next position, then the two trigrams are deemed to form a pattern. This process is continued until all the trigrams in the sentence are matched with the significant trigrams. The local grammar then comprises the significant contiguous trigrams that are found. These domain-specific patterns, extracted from the specialist corpus (and its constituent sub-corpus), are then used to extract similar patterns and information from a test corpus to validate the patterns found in the training corpus.
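A minimal sketch of steps 1-2 of the algorithm (weirdness, frequency z-scores, and keyword selection), assuming plain {word: count} frequency dictionaries. The smoothing of unseen words and the exact way the weirdness z-score is computed are assumptions not spelled out in the algorithm.

import statistics

def keywords(freq_special, n_special, freq_general, n_general):
    # Weirdness: relative frequency in the special corpus over the general one
    weirdness = {}
    for w, f in freq_special.items():
        f_gen = freq_general.get(w, 1)               # smooth words unseen in SG (assumption)
        weirdness[w] = (f / n_special) / (f_gen / n_general)

    # z-scores of raw frequencies and of weirdness values
    f_vals, w_vals = list(freq_special.values()), list(weirdness.values())
    f_mean, f_sd = statistics.mean(f_vals), statistics.stdev(f_vals)
    w_mean, w_sd = statistics.mean(w_vals), statistics.stdev(w_vals)

    keys = []
    for w in freq_special:
        z_f = (freq_special[w] - f_mean) / f_sd
        z_w = (weirdness[w] - w_mean) / w_sd
        if z_f > 1 and z_w > 1:                      # both z-scores above 1 (step 2)
            keys.append((w, z_f, z_w))
    # order keywords by the magnitude of the two z-scores
    return sorted(keys, key=lambda t: (t[1], t[2]), reverse=True)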
The following is a demonstration of how the algorithm works using English and Arabic texts. We present an analysis of a corpus of financial news wire texts: 1,204 news reports produced by Reuters UK Financial News, comprising 431,850 tokens. One of the frequent words in the corpus is percent, with 3,622 occurrences and a relative frequency of 0.0084%. When the frequency of this keyword is looked up in the British National Corpus (100 million words), it was found that percent is 287 times more frequent in the financial corpus than in the British National Corpus; this ratio is sometimes termed weirdness (of special language). The weirdness of the grammatical words the and to is unity, as these tokens are distributed with the same (relative) frequency in Reuters Financial and the BNC. The z-score computed using the frequency of the token in Reuters Financial is 12.64: the distribution of percent is 12 standard deviations above the mean of all words in the financial corpus. (The z-score computed for weirdness is positive as well.) The heuristic here is this: a token is a candidate keyword if both its z-scores are greater than a small positive number. So percent, the most frequent token with both frequency and weirdness z-scores over zero, was accepted as a keyword.

The collocates of the keyword percent were then extracted using the mutual information statistics presented by Smadja (1994). A collocate in this terminology can be anywhere in the vicinity of +/- N words. The frequency at each neighbourhood position is calculated and then used to compute the 'peaks' in the histogram formed by the neighbourhood frequencies, and the strength of the collocation is calculated on a similar basis. The keyword generally collocates with certain words that have frequencies higher than itself (the upward collocates) and with certain words that have lower frequency (the downward collocates); these terms were coined by John Sinclair. Upward collocates are usually grammatical words and downward collocates are lexical words (nouns, adjectives), and hence the downward collocates are treated as candidate compound words. There were 46 collocates of percent in our corpus: 34 downward collocates and 12 upward collocates. A selection of 5 downward and 5 upward collocates is shown in Table 1 and Table 2 (upward collocates of percent in a corpus of 431,850 words).

The financial texts comprise a large number of numerals (integers and decimals), which we denote as <no>. The numerals collocate strongly with percent for obvious reasons. The collocates are then used to extract trigrams comprising the collocates that occur at particular positions in the various sentences of our corpus (Table 3: trigrams of percent). There are many other frequent patterns where the frequency of individual tokens is quite low but at least one member of the trigram has higher frequency: such low-frequency tokens are omitted and marked by the (#) symbol. All the trigrams containing such tokens with at least two others are used to extract other significant trigrams. Sometimes more than one low-frequency token precedes or succeeds high-frequency tokens, and these are denoted by the symbol (*), as shown in Table 4. The search for contiguous trigrams leads to larger and more complex patterns (Table 5: some of the top patterns of percent, where <s> identifies a sentence boundary).

Arabic is written from right to left and its writing system does not employ capitalization. The language is highly inflected compared to English; words are generated using a root-and-pattern morphology.
Prefixes and suffixes can be attached to the morphological patterns for grammatical purposes. For example, the grammatical conjunction "and" in Arabic is attached to the beginning of the following word. Words are also sensitive to the gender and number they refer to, and their lexical structure changes accordingly. As a result, more word types can be found in Arabic corpora than in English corpora of the same size and type. Short vowels, which are represented as marks in Arabic, are also omitted from most Arabic texts, resulting in some words having the same lexical structure but different semantics. These grammatical and lexical features of Arabic cause more complexity and ambiguity, especially for NLP systems designed for thorough processing of Arabic texts compared to English. A shallow and statistical approach to IE using texts of a specialism can be useful for abstracting away many of the complexities of Arabic texts.

Given a 431,563-word corpus comprising 2,559 texts of Reuters Arabic Financial News and the same thresholds we used with the English corpus, percent (al-meaa) is again the most frequent term with frequency and weirdness z-scores greater than zero. It has 3,125 occurrences (0.0072%), a frequency z-score of 19.03 and a weirdness of 76 compared against our Modern Standard Arabic Corpus (MSAC). There were 31 collocates of percent: 7 upward and 23 downward. The downward collocates of percent appear to be names of instruments, i.e., shares and indices (Table 6). The upward collocates are the so-called closed-class words, as in English, like in, on and that (Table 7). Using the same thresholds, the trigrams (Table 8) appear to be different from the English trigrams in that the words of movement are not included here; this is because Arabic has a richer morphological system than English and Financial Arabic is not as standardised as Financial English. However, it will not be difficult to train the system to recognise the variants of rose and fell in Financial Arabic (Table 9: some patterns of percent (al-meaa)).
In this section, we describe our methodology for Visual Commonsense Generation. Section 3.1 describes our model architecture. Section 3.2 introduces our pretraining tasks as well as our self-training based data filtering technique. Figure 1 illustrates the architecture of our KM-BART. The backbone of our model is BART (Lewis et al., 2020), which is a Transformer-based sequence-to-sequence autoencoder. We modify the original BART to adapt the model to cross-modality inputs of images and texts. We add special tokens to adapt the model to different pretraining/evaluation tasks. In the following subsections, we give the details of our visual feature extractor, the encoder, and the decoder.

Following previous work on Vision-Language models (Tan and Bansal, 2019; Lu et al., 2019), we use a convolutional neural network pretrained on the COCO dataset to extract visual embeddings, which are subsequently fed to the Transformer-based cross-modal encoder. Specifically, we use the pretrained Mask R-CNN (He et al., 2017) from detectron2. For each image, the pretrained Mask R-CNN proposes the bounding boxes for detected objects. The area within a bounding box is a Region of Interest (RoI). We leverage the intermediate representations of the RoIs in the Mask R-CNN to obtain fixed-size embeddings for the RoIs, V = {v_1, ..., v_i, ..., v_N}, where i is the index over RoIs and N is the number of RoIs for an image. The visual embedding of the i-th RoI is v_i ∈ R^d, where d is the embedding dimension. For each of the RoIs, the Mask R-CNN also outputs the class distribution p(v_i), which is later used for Masked Region Modeling.

Following Lewis et al. (2020), the encoder of our model is based on a multi-layer bidirectional Transformer. We introduce special tokens to adapt it to our pretraining and downstream evaluation tasks. Specifically, each example starts with a special token indicating the task type of the current example. For our pretraining task of Knowledge-Based Commonsense Generation (see Section 3.2.1), we use <before>, <after>, or <intent> as the starting special token. For Attribute Prediction and Relation Prediction (Section 3.2.2), we use <region caption>. Finally, for Masked Language Modeling and Masked Region Modeling, we use <caption>.

Furthermore, to inform the model of the different modalities of inputs, we add three sets of different special tokens. For images, we use <img> and </img> to indicate the start and the end of the visual embeddings, respectively. For texts, we introduce different special tokens to distinguish between two sets of textual inputs: events and captions. Events are image descriptions which the model uses for reasoning about future/past events or present intents of characters in the commonsense generation task, while captions are for Masked Language Modeling, where linguistic information plays a more important role. Hence, to inform the model of these two types of textual inputs, we use <event> and </event> for events, and <mlm> and </mlm> for captions. In the following sections, we denote the textual inputs of words and special tokens by W = {w_1, ..., w_T}, where T is the length of the textual inputs. For a token w, its embedding is e ∈ R^d, where d is the dimension of the embeddings.

The decoder of our model is also a multi-layer Transformer. Unlike the encoder, which is bidirectional, the decoder is unidirectional, as it is supposed to be autoregressive when generating texts. The decoder does not take the visual embeddings as inputs.
Instead, we use embeddings of the special token <img feat> to replace the actual visual embeddings. For Masked Region Modeling and Masked Language Modeling, we use <cls> to replace the masked regions or words (see Figure 1). The model should predict the masked words and the class distribution of the masked regions during pretraining.

To pretrain our model, we use four image-text datasets: the Conceptual Captions Dataset (Sharma et al., 2018), the SBU Dataset (Ordonez et al., 2011), the Microsoft COCO Dataset (Lin et al., 2014) and Visual Genome (Krishna et al., 2017). In the remainder of this section, we use D to denote the individual datasets for each of the pretraining tasks. Statistics of the datasets are given in Table 1. The above datasets consist of examples of parallel images and texts and are widely used in previous work (Tan and Bansal, 2019; Lu et al., 2019; Zhou et al., 2020).

The knowledge-based commonsense generation (KCG) task aims to improve the performance of KM-BART on the VCG task. We leverage knowledge induced from COMET (Bosselut et al., 2019), which is a large language model pretrained on external commonsense knowledge graphs. We only use COMET to generate new commonsense descriptions on the SBU and COCO datasets, due to limits in computational power for pretraining. For each image-text pair, we use COMET to generate commonsense descriptions from the text using all five relations mentioned above. To adapt COMET-generated commonsense knowledge to VCG, we consider the relations xIntent and xWant from COMET as intent, xNeed as before, and xReact and xEffect as after. In this way, we generate additional commonsense knowledge for the SBU and COCO datasets. The newly generated dataset has more than 3.6 million examples (Table 3). However, the generated commonsense knowledge is not always reasonable, as only textual information is used while the visual information is completely ignored. To mitigate this problem, we further filter the dataset by employing a self-training based data filtering strategy.

Self-Training Based Data Filtering: Our strategy aims to filter the generated commonsense knowledge dataset so that the examples in the filtered dataset closely resemble the examples in the VCG dataset. To achieve this goal, we first initialize our KM-BART with BART parameters and finetune KM-BART on the VCG dataset for 30 epochs. The finetuned KM-BART already achieves good performance on the VCG dataset, with a CIDEr score of 39.13 (see Table 4). We then leverage this finetuned model to evaluate the quality of the commonsense descriptions generated by COMET. We feed the corresponding images, texts, and relations as inputs to the finetuned KM-BART and then compute the cross-entropy (CE) loss of the COMET-generated commonsense descriptions. We observe that commonsense descriptions with a lower CE loss make more sense than those with a higher CE loss. Notice that when computing the CE loss of the COMET-generated commonsense descriptions, our KM-BART leverages both the textual inputs and the visual inputs. We provide examples of our data filtering strategy in the Supplementary Material.

[Caption: An example from the VCG dataset. We use nucleus sampling with p = 0.9 during decoding. We show the inference sentences from (1) the full model § without event descriptions but with images as inputs; (2) the full model † with event descriptions and images as inputs; (3) the ground truth. Bold indicates inference sentences from our KM-BART († and § indicate the corresponding models).]
We compute the CE loss for all the commonsense descriptions in the VCG dataset and in the new dataset generated by COMET. Figure 2 shows the distributions of CE loss for the two datasets. We observe that commonsense descriptions generated by COMET result in higher CE losses, which is expected, as images are completely ignored when using COMET to generate natural language commonsense descriptions. We only keep the examples whose CE loss is below 3.5. Table 3 shows the statistics of the generated datasets before and after data filtering. By filtering, we keep only 1.46 million examples, roughly accounting for 40% of the original examples.

(Figure 2: The distribution of the average cross-entropy on 10,000 samples in the VCG dataset and our enhanced dataset. For the generated dataset, we keep the examples whose cross-entropy loss is below 3.5.)

Finally, we leverage the newly generated commonsense knowledge dataset by pretraining KM-BART on it. We expect that, by pretraining, the model reaches higher performance on the VCG dataset. Let S = {w_1, ..., w_L} be a commonsense description of the newly generated dataset D; the loss function for KCG is

L_KCG = − Σ_{l=1}^{L} log p_θ(w_l | w_{<l}, V, W),    (1)

where L is the length of the generated sequence, l is the index over individual tokens in the target commonsense description S, and V and W are the visual inputs and textual inputs, respectively. θ represents the model parameters to be optimized.

The Visual Genome dataset consists of 2.3 million relationships and 2.8 million attributes. To utilize these data, we use attribute prediction (AP) and relation prediction (RP) as pretraining tasks, which enable the model to learn intrinsic properties among different objects in an image. In the AP task, we feed the output vectors of the decoder for each image feature into an MLP classifier. In the RP task, we concatenate the two output vectors of the decoder for each image feature pair and feed the result into another MLP classifier. We use the cross-entropy loss for both tasks. We denote the indices for AP by 1 ≤ j ≤ A and the indices for RP by 1 ≤ k ≤ R, where A is the number of AP examples and R is the number of RP examples. We denote the label for the j-th AP example by L_a(v_j), and the label for the k-th RP example by L_r(v_{k_1}, v_{k_2}), where v_{k_1} and v_{k_2} are the two RoIs of the current RP example. The loss function for the AP task is

L_AP = − Σ_{j=1}^{A} log p_θ(L_a(v_j) | V, W),    (2)

and the loss function for the RP task is

L_RP = − Σ_{k=1}^{R} log p_θ(L_r(v_{k_1}, v_{k_2}) | V, W).    (3)

Following previous work (Devlin et al., 2019; Liu et al., 2019), we randomly mask the input textual tokens with a probability of 15% in the Masked Language Modeling (MLM) task. Within this 15% of the tokens, we use <mask> to replace the masked token with a probability of 80%, use a random token as the replacement with a probability of 10%, and keep the masked token unchanged with a probability of 10%. We denote the mask indices by 1 ≤ m ≤ M, where M is the number of masked tokens. We denote the masked token by w_m and the remaining tokens that are not masked by w_{\m}; the loss function for MLM is defined as

L_MLM = − Σ_{m=1}^{M} log p_θ(w_m | w_{\m}, V).    (4)

In the Masked Region Modeling (MRM) task, we sample image regions and mask the corresponding feature vectors with a probability of 15%. The masked vector is replaced by a vector filled with zeros. The model needs to predict the distribution over semantic classes for the masked regions. The loss function minimizes the KL divergence between the output distribution and the distribution predicted by the Mask R-CNN used in visual feature extraction.
We denote the mask indices by 1 ≤ n ≤ N, where N is the number of masked regions. We let p(v_n) denote the class distribution of the masked region v_n detected by the Mask R-CNN and q_θ(v_n) denote the class distribution output by our model; the loss function for MRM is then

L_MRM = Σ_{n=1}^{N} D_KL( p(v_n) || q_θ(v_n) ).    (5)

To combine all the losses described above, we weight each of the losses by W_KCG, W_AP, W_RP, W_MLM, W_MRM ∈ R. The weights are chosen to roughly balance every term during the training phase. The final loss is

L = W_KCG L_KCG + W_AP L_AP + W_RP L_RP + W_MLM L_MLM + W_MRM L_MRM.    (6)
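A minimal PyTorch-style sketch of the self-training based data filtering described in Section 3.2.1: each COMET-generated description is scored with the finetuned KM-BART and kept only if its cross-entropy falls below the threshold. The model interface and tensor shapes are hypothetical placeholders, not the actual KM-BART API.

import torch
import torch.nn.functional as F

@torch.no_grad()
def filter_generated(model, examples, threshold=3.5):
    """examples: iterable of (visual_feats, text_input_ids, target_ids) tuples."""
    kept = []
    for visual, text, target in examples:
        # teacher forcing: feed the shifted target to the decoder (assumed interface)
        logits = model(visual_inputs=visual, text_inputs=text,
                       decoder_input_ids=target[:-1])
        ce = F.cross_entropy(logits, target[1:])     # per-example cross-entropy
        if ce.item() < threshold:                    # keep only low-loss examples
            kept.append((visual, text, target))
    return kept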
Our method predicts a variable-length text span given a fixed-length context on either side. We rely on the self-attentive Transformer model (Vaswani et al., 2017) with learned position embeddings, where the encoder takes the context as input and the decoder predicts the missing span. Architecture details and training parameters are in the Appendix. We use the subword tokenizer from Vaswani et al. (2017), but report all statistics except perplexity in terms of proper words. In addition to the context, we also condition our base model on the desired output length. We append to the input sequence a marker token denoting one of 5 possible length bins (Fan et al., 2018a). Length conditioning lets us compare different models and decoding strategies with the same average generation length, thus avoiding length preference biases in human evaluation.

In our proposed approach, we decompose the generation task hierarchically, sampling a set of words desired for generation before generating text that includes these words.

Word Prediction: For each infilling instance, our model ingests the context data and predicts a sequence of subwords in frequency order, starting with rare subwords first. The word prediction model is a standard Transformer, for which we prepare the training data such that the target subwords are reordered by increasing frequency. Our motivation for frequency ordering is twofold. Conceptually, rare words have a denser information content in an information-theoretic sense (Sparck Jones, 1972; Shannon, 1948), i.e., it is easier to predict the presence of common words given nearby rare words than the opposite. Practically, predicting rare words first allows us to interrupt decoding after a fixed number of steps, then delegate the prediction of more common words to our second-stage model.

[Table 1: Example story contexts with the spans produced by the HIER-3, HIER-max, BASE-sampling10, BASE-sampling, and RC systems; the sample texts are not reproduced here.]

The second-stage model, also a Transformer, is responsible for generating a text span given the surrounding context, a desired length marker, and a list of words predicted by the first-stage model. It takes as input the concatenation of these three signals. At training time, we select a list of k words from the missing span to condition on, where k is sampled uniformly between 0 and half the target length. At inference, this model takes the conditioning words from the word generation model introduced above. Interestingly, such a word list could be edited interactively by writers, which we defer to future work.

Training with a variable number of conditioning words allows us to choose the number of provided words at inference time. We observe that this choice needs to balance providing sufficient information to influence coherence and novelty in the generated spans, while preserving some headroom for the second-stage model to suggest its own common words and produce fluent text. Some examples of the unusual wording choices made when the second-stage model is conditioned on all predicted words (HIER-max) can be seen in Table 1.
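A minimal sketch of the frequency ordering used for the first-stage targets and of how the conditioning word list handed to the second stage can be truncated. The frequency dictionary and the example counts are illustrative only.

def rare_first_targets(span_subwords, freq):
    """Reorder the target subwords of a missing span by increasing corpus frequency."""
    return sorted(span_subwords, key=lambda s: freq.get(s, 0))

def conditioning_words(predicted_rare_first, k):
    """Keep only the first k predicted (i.e. rarest) words for the second-stage model."""
    return predicted_rare_first[:k]

# Example, with hypothetical counts freq = {"the": 9e6, "were": 5e6, "fire": 3e4, "tantalized": 120}:
# rare_first_targets(["the", "were", "tantalized", "fire"], freq)
# -> ["tantalized", "fire", "were", "the"]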
Sentences in the CLC contain one or more error corrections, each of which is labeled with one of 75 error types (Nicholls, 2003). Error types include countability errors, verb tense errors, word order errors, etc., and are often predicated on the part of speech involved. For example, the category AG (agreement) is augmented to form AGN (agreement of a noun) to tag an error such as "here are some of my opinion". For ease of analysis, and due to the high accuracy of state-of-the-art POS tagging, in addition to the full 75-class problem we also perform experiments using a compressed set of 15 classes. This compressed set removes the part-of-speech components of the error types, as shown in Figure 1.

We create a dataset of corrections from the CLC by extracting sentence pairs (x, y), where x is the original (student's) sentence and y is its corrected form by the teacher. We create multiple instances out of sentence pairs that contain multiple corrections. For example, consider the sentence "With this letter I would ask you if you wuld change it". This contains two errors: "ask" should be replaced with "like to ask" and "wuld" is misspelled. These are marked separately in the CLC, and imply the corrected sentence "With this letter I would like to ask you if you would change it". Here we extract two instances, consisting of "With this letter I would ask you if you would change it" and "With this letter I would like to ask you if you wuld change it", each paired with the fully corrected sentence. As each correction in the CLC is tagged with an error type t, we then form a dataset of triples (x, y, t). This yields 45,080 such instances. We use these data in cross-validation experiments with the feature-based MaxEnt classifier in the Mallet (McCallum, 2002) software package.
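A minimal sketch of how such (original, fully corrected, error type) triples can be built when a sentence carries several annotated corrections: each instance keeps exactly one error uncorrected and applies all the others. The (start, end, replacement, error_type) span format is a simplification of the CLC annotation, assumed for illustration.

def make_instances(sentence, corrections):
    """corrections: list of (start, end, replacement, error_type) character spans."""
    def apply(text, corr_list):
        # apply right-to-left so earlier character offsets stay valid
        for start, end, repl, _ in sorted(corr_list, key=lambda c: c[0], reverse=True):
            text = text[:start] + repl + text[end:]
        return text

    fully_corrected = apply(sentence, corrections)
    instances = []
    for i, (_, _, _, err_type) in enumerate(corrections):
        others = corrections[:i] + corrections[i + 1:]
        partially_corrected = apply(sentence, others)   # leave only error i in place
        instances.append((partially_corrected, fully_corrected, err_type))
    return instances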
The proposed models are described in terms of the feature engineering used to extract the required features, followed by a description of the classifiers. The framework of the proposed methodology for CoHope-ML and CoHope-NN consists of a preprocessing step on the train and test data, followed by a feature engineering module that extracts features and uses them to train and test the models.

The preprocessing steps include converting emojis to the corresponding text (using the emoji library), removing punctuation, words of length less than 2 and unwanted characters (such as !()-[];:'" ¡¿./?$=% +@* ', etc.), and converting the text to lowercase.

The feature engineering module uses everygrams of the input text, which are then converted into TF-IDF vectors. Tables 1 and 2 give samples of input texts and the features extracted from the corresponding texts.

Table 1: Sample of an input text and the features extracted from it.
Input text: "yuvanvera level ya." (in Ta-En)
Extracted features: yu, uv, va, an, n, v, ve, er, ra, a, l, le, ev, ve, el, l, y, ya, yuv, uva, van, an, ve, ver, era, ra, le, lev, eve, vel, el, ya, yuva, uvan, van, ver, vera, era, lev, leve, evel, vel, yuvan, uvan, vera, vera, leve, level, evel, yuvan, vera, level, level, yuvanvera, level, ya

The proposed models are described below.

There are various notions of ensemble learning, such as bagging, stacking, etc. Due to the simplicity and efficiency of the bagging method, the CoHope-ML model is developed as a hard-voting classifier based on bagging, ensembling three sklearn classifiers: Logistic Regression (LR), eXtreme Gradient Boosting (XGB) (Chen and Guestrin, 2016) and a Multi-Layer Perceptron (MLP). The idea behind ensembling simple classifiers as estimators is to build a robust classifier utilizing the strength of each classifier. The parameters used for each estimator are given in Table 3. The CoHope-ML model is trained on the TF-IDF vectors obtained in the feature engineering module. The framework of CoHope-ML is shown in Figure 1.

The steps involved in designing the CoHope-TL model are described below.

Training the tokenizer: Romanized text from the Dakshina dataset (Roark et al., 2020), combined with code-mixed texts from (Chakravarthi et al., 2020c) and (Chakravarthi et al., 2020a), is preprocessed and used to train a byte-level Byte-Pair Encoding tokenizer (https://huggingface.co/transformers/tokenizer_summary.html) with a vocab size of 52,000 words and a minimum frequency of 2 (separately for each language pair, Ma-En and Ta-En). The resulting tokenizer is later used in training the BERT LM.

Training the BERT LM: The BERT LM is trained using the trained tokenizer and the raw texts used in the previous step, with the transformers library (https://pypi.org/project/transformers/) and the following configuration:

• vocab size = 52 000
• max position embeddings = 514
• num attention heads = 12
• num hidden layers = 6
• type vocab size = 1

Table 4 gives a summary of the layers in the BiLSTM-Conv1D model (Keras: https://keras.io/), and the framework of CoHope-TL is shown in Figure 3.
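A minimal sketch of the CoHope-ML pipeline described above: TF-IDF features over character n-grams and a hard-voting ensemble of LR, XGB and MLP. The n-gram range and the estimator hyperparameters are illustrative defaults, not the tuned values from Table 3, and labels are assumed to be integer-encoded.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import VotingClassifier
from xgboost import XGBClassifier

def build_cohope_ml():
    # character n-grams as a stand-in for the everygram features
    vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 6))
    ensemble = VotingClassifier(
        estimators=[("lr", LogisticRegression(max_iter=1000)),
                    ("xgb", XGBClassifier()),
                    ("mlp", MLPClassifier(hidden_layer_sizes=(100,)))],
        voting="hard")                        # majority vote over the three estimators
    return vectorizer, ensemble

# Usage: X = vectorizer.fit_transform(preprocessed_train_texts)
#        ensemble.fit(X, integer_labels)
#        preds = ensemble.predict(vectorizer.transform(preprocessed_test_texts))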
Our proposed method consists of two steps. The first involves inducing training examples with two data augmentation models. Next, a task-specific classifier is trained on both the original and the newly generated training instances, with adversarial perturbation for improved robustness and generalization.

Reorder augmentation is based on the intuition of making a model more robust with respect to differences in word order typology. If our training examples consist entirely of instances from a language L_S with a fairly strict subject-verb-object (SVO) word order, such as English, the model will be less well equipped to pay attention to subtle semantic differences between sentences from a target language L_T obeying subject-object-verb (SOV) order. To alleviate this problem, we can rely on auxiliary data to diversify the training data. For this, we obtain word alignments for unannotated bilingual parallel sentence pairs covering L_S and an auxiliary language L_A that need not be the same as L_T. We then reorder all source sentences to match the word order of L_A based on the alignments, and train a model to apply such reordering to the NLI training instances. Formally, suppose we have obtained l unlabelled parallel sentences in the source language L_S and the auxiliary language L_A, C = {(s_i, a_i) | i = 1, ..., l}, where (s, a) is a source-auxiliary language sentence pair. Based on a word alignment model, in our case FastAlign (Dyer et al., 2013), which uses Expectation Maximization to compute the lexical translation probabilities, we obtain a word pair table for each sentence pair (s, a), denoted as A(s, a) = {(i_1, j_1), ..., (i_m, j_m)}. Following the word order of L_A, we then reorder the source sequence s by consulting the table A(s, a), yielding the new sentence pair (s, s̃). Next, we consider a pretrained Seq2Seq model, denoted as r(·; θ). The model is assumed to have been pretrained with an encoder and a decoder in the source language, and we fine-tune this generative model by training on the new parallel corpus C̃ = {(s_i, s̃_i) | i = 1, ..., l}. This generative Seq2Seq model can then reorder the sequences in the labeled training dataset D = {(x_i, y_i) | i = 1, ..., n}, where n is the number of labeled instances, each x_i consists of a sequence pair (s_1, s_2), and each y_i ∈ Y is the corresponding ground-truth label describing their relationship.

[Figure: a source sentence, its word alignment with the target sentence, and the resulting reordered source sentence.]

Our second augmentation strategy involves training a controllable model that, given a sentence and a label describing the desired relationship, seeks to emit a second sentence that stands in said relationship to the input sentence. Thus, given an existing training sentence pair, we can consider different variations of one sentence in the pair and invoke the model to generate a suitable second sentence. However, such automatically induced samples from SA are inordinately noisy, precluding their immediate use as training data, so we exploit a large pretrained Teacher model trained on available source language samples to rectify the labels of these synthetic samples with appropriate strategies.

Generation. As we wish to be able to control the label of a generated example, the requested label is prepended to the input as a (textual) prefix before it is fed into a Seq2Seq model.
We adopt the ground-truth label of each example as the respective prefix, resulting in a new input sequence (y_i : s_1) coupled with s_2 as the desired output, forming a training pair for the generation model. Given the resulting labeled training dataset D_SA, we can fine-tune a pretrained Seq2Seq model, denoted as g(·; θ). This generative Seq2Seq model can then be invoked for semantic data augmentation to generate new training instances. For each labeled input sequence (ȳ : s_1), where ȳ ∈ Y \ {y_i}, we generate an s̄_2 via the fine-tuned Seq2Seq model, yielding a new training instance (s_1, s̄_2, ȳ).

Label Rectification. The semantic augmentation induces s̄_2 automatically based on s_1 and the requested label ȳ. However, the obtained s̄_2 may not always genuinely have the desired relationship ȳ to s_1. Thus, we treat this data as inherently noisy and propose a rectifying scheme based on a Teacher model. We wish for this Teacher to be as accurate as possible, so we start off with a large pretrained language model specifically for the source language L_S, which we assume obtains better performance on L_S than a pretrained multilingual model. We train the Teacher network h(·; θ) for K epochs using the set of original labeled data D. This Teacher model is then invoked to verify and potentially rectify labels from the automatically induced augmentation data D_ã = {(x_i, y_i) | i = 1, ..., m} obtained in the previous step (where m is the number of instances). We assume (ỹ_i, c) = h(x_i; θ) denotes the predicted label along with the confidence score c ∈ [0, 1] emitted by the classifier, and assume a confidence threshold T has been predetermined. There are several strategies to determine the final labels.

• Teacher Strategy: We adopt D_r = {(x_i, ỹ_i) | (x_i, y_i) ∈ D_ã, (ỹ_i, c) = h(x_i), c > T}, i.e., when the confidence score is above T, we believe the Teacher model is sufficiently confident to ensure a reliable label, while other instances are discarded.

• TR Strategy: An alternative scheme is to instead adopt D_r = {(x_i, Φ(y_i, ỹ_i, c)) | (x_i, y_i) ∈ D_ã, (ỹ_i, c) = h(x_i)}, where Φ(y_i, ỹ_i, c) = ỹ_i if c > T, and y_i otherwise. Here, labels remain unchanged when the Teacher predictions match the originally requested labels. In case of an inconsistency, we adopt the Teacher model's label if it is sufficiently confident, and otherwise retain the requested label.

Upon completing the two kinds of data augmentation, we possess synthesized data that is substantially less noisy, denoted as D_r, which can be incorporated into the original training data D to yield the final augmented training set D_a = D ∪ D_r. With this, we proceed to train a new model f(·; θ) for the final cross-lingual sentence pair classification. As a special training regimen, we adopt adversarial training, which seeks to minimize the maximal loss incurred by label-preserving adversarial perturbations (Szegedy et al., 2014; Goodfellow et al., 2015), thereby promising to make the model more robust. Nonetheless, the gains observed from it in practice have been somewhat limited in both monolingual and cross-lingual settings.
We conjecture that this is because it has previously merely been invoked as an additional form of monolingual regularization (Miyato et al., 2017). In contrast, we hypothesize that adversarial training is particularly productive in a cross-lingual framework when used to exploit augmented data, as it encourages the model to be more robust towards the divergence among similar words and word orders in different languages, and to better adapt to the new, modestly noisy data. This hypothesis is later confirmed in our experimental results.

Adversarial training is based on the notion of finding optimal parameters θ that make the model robust against any perturbation r within a norm ball on a continuous multilingual (sub)word embedding space. Hence, the loss function becomes

min_θ Σ_{(x_i, y_i) ∈ D_a} L(f(x_i + r_adv(x_i, y_i); θ), y_i),    (1)

where r_adv(x_i, y_i) = argmax_{r, ||r|| ≤ ε} L(f(x_i + r; θ̂), y_i).

Generally, a closed form for the optimal perturbation r_adv(x_i, y_i) cannot be obtained for deep neural networks. Goodfellow et al. (2015) proposed approximating this worst-case perturbation by linearizing f(x_i; θ̂) around x_i. With a linear approximation and an L2 norm constraint, the adversarial perturbation is

r_adv(x_i, y_i) ≈ ε g(x_i, y_i) / ||g(x_i, y_i)||_2,    (2)

where g(x_i, y_i) = ∇_{x_i} L(f(x_i; θ̂), y_i). However, neural networks are typically not linear even over a relatively small region, so this approximation cannot guarantee reaching the best optimal point within the bound. Madry et al. (2017) demonstrated that projected gradient descent (PGD) allows us to find a better perturbation r_adv(x_i, y_i). In particular, for the norm ball constraint ||r|| ≤ ε, given a point r_0, the projection Π_{||r|| ≤ ε} aims to find a perturbation r that is closest to r_0 as follows:

Π_{||r|| ≤ ε}(r_0) = argmin_{||r|| ≤ ε} ||r − r_0||.    (3)

To find more optimal points, K-step PGD is needed during training, which requires K forward-backward passes through the network. With a linear approximation and an L2 norm constraint, PGD takes the following step in each iteration:

r_{t+1} = Π_{||r|| ≤ ε}( r_t + α g(x_i, y_i, r_t) / ||g(x_i, y_i, r_t)||_2 ),    (4)

where g(x_i, y_i, r_t) = ∇_{r_t} L(f(x_i + r_t; θ̂), y_i). Here, α is the step size and t is the step index.
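A minimal PyTorch-style sketch of the K-step PGD perturbation in Eq. (4) with an L2 norm-ball projection. model.embed and model.loss_with_perturbation are hypothetical interfaces for a classifier that accepts an additive perturbation on its (sub)word embeddings; they are assumptions, not an existing library API.

import torch

def pgd_perturbation(model, x, y, eps=1.0, alpha=0.3, steps=3):
    r = torch.zeros_like(model.embed(x), requires_grad=True)
    for _ in range(steps):
        loss = model.loss_with_perturbation(x, y, r)
        grad, = torch.autograd.grad(loss, r)
        # gradient-ascent step, normalised by the gradient's L2 norm (Eq. 4)
        r = r + alpha * grad / (grad.norm(p=2) + 1e-12)
        # projection onto the L2 ball of radius eps (Eq. 3)
        norm = r.norm(p=2)
        if norm > eps:
            r = r * (eps / norm)
        r = r.detach().requires_grad_(True)
    return r.detach()

# Adversarial training then minimises model.loss_with_perturbation(x, y, r_adv)
# over the augmented training set D_a, in addition to the clean loss.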
The only system that our team submitted for the SMG-CH subtask is an ensemble model based on the XGBoost meta-learner, as illustrated in Figure 1. In this section, we describe the three machine learning techniques that provide their predictions as input for the meta-learner, as well as the gradient boosting method that combines the independent models.

String kernels (Lodhi et al., 2001) provide a way of comparing two documents based on the inner product generated by all substrings of length n, typically known as character n-grams. Being relatively simple to use and implement, this technique has many applications according to the literature (Cozma et al., 2018; Giménez-Pérez et al., 2017; Masala et al., 2017; Ionescu et al., 2014; Popescu and Ionescu, 2013), with emphasis on dialect identification and the good results obtained for this task in previous VarDial evaluation campaigns (Butnaru and Ionescu, 2018b; Ionescu and Butnaru, 2017). Similar to our last year's submission for the SMG-CH subtask (Gȃman and Ionescu, 2020), we employ the string kernels computed by the efficient algorithm introduced by Popescu et al. (2017). This gives us a dual representation of the data, through a kernel matrix where the cell on row i and column j represents the similarity between two text samples x_i and x_j. In our experiments, we consider the presence bits string kernel (Popescu and Ionescu, 2013) as the similarity function. For two strings x_i and x_j over a set of characters S, the presence bits string kernel is defined as follows:

k_n^{0/1}(x_i, x_j) = Σ_{g ∈ S^n} #(x_i, g) · #(x_j, g),

where n is the length of the n-grams and #(x, g) is a function that returns 1 when the number of occurrences of the n-gram g in x is greater than zero, and 0 otherwise.

The resulting kernel matrix is plugged into a ν-Support Vector Regression (ν-SVR) model. SVR (Drucker et al., 1997) is a modified Support Vector Machines (SVM) (Cortes and Vapnik, 1995) model that is repurposed for regression. Similar to SVM, SVR uses the notions of support vectors and margin in order to find an optimal estimator. However, instead of a separating hyperplane, SVR aims to find a hyperplane that estimates the data points (support vectors) within the margin with minimal error. In our experiments, we employ an equivalent SVR formulation known as ν-SVR (Chang and Lin, 2002), where ν is the configurable proportion of support vectors to keep with respect to the number of samples in the data set. Using ν-SVR, the optimal solution can converge to a sparse model, with only a few support vectors. This is especially useful in our case, as the data set provided for the SMG-CH subtask does not contain many samples. Another reason to employ ν-SVR in our regression task is that it was found to surpass other regression methods in other use cases, such as complex word identification (Butnaru and Ionescu, 2018a).

Characters are the base units in building the words that exist in the vocabulary of most languages. Among the advantages of working at the character level, we enumerate (i) the neutrality with respect to language theory (independence of word boundaries, semantic structure or syntax) and (ii) the robustness to spelling errors and words that are outside the vocabulary (Ballesteros et al., 2015).
These explain the growing interest in using characters as features in various language modeling setups (Al-Rfou et al., 2019; Ballesteros et al., 2015; Sutskever et al., 2011; Wood et al., 2009; Zhang et al., 2015) .Word embeddings are vectorial word representations that associate similar vectors to semanti-cally related words, allowing us to express semantic relations mathematically in the generated embedding space. From the initial works of Bengio et al. (2003) and Schütze (1993) to the recent improvements in the quality of the embedding and the training time (Collobert and Weston, 2008; Mikolov et al., 2013a,b; Pennington et al., 2014) , generating meaningful representations of words became a hot topic in the NLP research community. These improvements, and many others not mentioned here, have been extensively used in various NLP tasks (Garg et al., 2018; Glorot et al., 2011; Musto et al., 2016) .Considering the sometimes orthogonal benefits of character and word embeddings, an intuitive idea has emerged, namely that of combining the character and word representations, which should complement each other in various aspects and provide better meaningful cues in the learning process of hybrid neural architectures (Liang et al., 2017) . Thus, throughout the experiments performed in this work, we choose to employ a hybrid convolutional neural network working at both the character level (Zhang et al., 2015) and the word level (Kim, 2014) . The hybrid architecture concatenates two CNNs, out of which one is equipped with a character embedding layer and the other has an analogous word embeddings layer. The networks are able to automatically learn a 2D representation of text formed of either character or word embedding vectors, that are further processed by convolutional and fullyconnected layers.The last convolutional activation maps of our two CNNs sharing similar architectural choices are concatenated in what we call a hybrid network (Liang et al., 2017) , with the aim of accurately and simultaneously predicting the two location components required for the geolocation task. The first component of the hybrid network is a characterlevel CNN, which takes the first and last 250 characters in the input and encodes each character with its position in the alphabet, then learns end-to-end embeddings for each character, as vectors of 128 components. The second CNN used as part of the hybrid network operates at the word level and it receives as input each sample encoded, initially, as an array of 100 indexes, corresponding to the position of each word in the vocabulary. As part of the pre-processing for the word-level CNN, we split the initial text into words, keeping the first 50 words and the last 50 words in the sample. In the end, we employ the German Snowball Stemmer (Weissweiler and Fraser, 2018) to reduce each word to its stem, in an effort to reduce the vocabulary size by mapping variations of the same word to a single vocabulary entry. The word-level CNN is also equipped with an embedding layer, learning end-to-end word representations as vectors of length 128.Each of the two CNNs has three convolutional (conv) layers placed after the initial embedding layer. The number of filters decreases from 1024 for the first conv layer to 728 for the second conv layer and to 512 for the third conv layer. Each conv layer is equipped with Rectified Linear Units (ReLU) (Nair and Hinton, 2010) as the activation function. The convolutional filter sizes differ across the two convolutional architectures. 
Hence, we use kernels of sizes 9, 7 and 7 for the char CNN. In the same time, we choose 7, 5 and 3 as appropriate filter sizes for the conv layers of the word CNN. In the char CNN, each conv layer is followed by a max-pooling layer with filters of size 3. In the word CNN, we add max-pooling layers only after the first two conv layers. The pooling filter sizes are 3 for the first pooling layer and 2 for the second pooling layer. The activation maps resulting after the last conv blocks of the char and the word CNNs are concatenated and the hybrid network continues with four fully-connected (fc) layers with ReLU activations. The fc layers are formed of 512, 256, 128 and 64 individual neural units, respectively.Transformers (Vaswani et al., 2017) represent an important advance in NLP, with many benefits over the traditional sequential neural architectures. Based on an encoder-decoder architecture with attention, transformers proved to be better at modeling long-term dependencies in sequences, while being effectively trained as the sequential dependency of previous tokens is removed. Unlike other contemporary attempts at using transformers in language modeling (Radford et al., 2018) , BERT (Devlin et al., 2019) builds deep language representations in a self-supervised fashion and incorporates context from both directions. The masked language modeling technique enables BERT to pretrain these deep bidirectional representations, that can be further fine-tuned and adapted for a variety of downstream tasks, without significant architectural updates. We also make use of this property in the current work, employing the Hugging Face (Wolf et al., 2020) version of the cased German BERT model 1 . The model was initially trained on the latest German Wikipedia dump, the OpenLe-galData dump and a collection of news articles, summing up to a total of 12 GB of text files. We fine-tune this pre-trained German BERT model for the geolocation of Swiss German short texts, in a regression setup. The choice of hyperparameters is, in part, inspired by the winning system in the last year's SMG-CH subtask at VarDial (Scherrer and Ljubešić, 2020) .Gradient tree boosting (Friedman, 2001 ) is based on training a tree ensemble model in an additive fashion. This technique has been successfully used in classification (Li, 2010) and ranking (Burges, 2010) problems, obtaining notable results in reputed competitions such as the Netflix Challenge (Bennett and Lanning, 2007) . Furthermore, gradient tree boosting is the ensemble method of choice in some real-world pipelines running in production (He et al., 2014) . XGBoost (Chen and Guestrin, 2016) is a tree boosting model targeted at solving large-scale tasks with limited computational resources. This approach aims at parallelizing tree learning while also trying to handle various sparsity patterns. Overfitting is addressed through shrinkage and column subsampling. Shrinkage acts as a learning rate, reducing the influence of each individual tree. Column subsampling is borrowed from Random Forests (Breiman, 2001) , bearing the advantage of speeding up the computations. In the experiments, we employ XGBoost as a metalearner over the individual predictions of each of the models described above. We opted for XG-Boost in detriment of average voting and a ν-SVR meta-learner, both providing comparatively lower performance levels in a set of preliminary ensemble experiments.
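A minimal sketch of the meta-learning step is shown below, assuming the base models' out-of-fold latitude/longitude predictions have already been stacked into a feature matrix. The random placeholder data and the XGBoost hyperparameters are illustrative only; shrinkage and column subsampling correspond to the learning_rate and colsample_bytree settings.

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from xgboost import XGBRegressor

# Hypothetical stacked features: out-of-fold (latitude, longitude) predictions
# from the three base models (string-kernel nu-SVR, hybrid CNN, German BERT).
rng = np.random.default_rng(0)
base_preds = rng.normal(size=(500, 6))     # 3 models x 2 coordinates
gold_coords = rng.normal(size=(500, 2))    # gold (latitude, longitude)

meta = MultiOutputRegressor(XGBRegressor(
    n_estimators=300, learning_rate=0.05,  # shrinkage
    subsample=0.8, colsample_bytree=0.8))  # column subsampling
meta.fit(base_preds, gold_coords)
print(meta.predict(base_preds[:3]))
```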
2
In this section, we explain how our method can be applied to text-to-SQL parsing.Formally, the labeled data for text-to-SQL parsing is given as a set of triples (x, d, y), and each triple represents an utterance x, the corresponding SQL query y and relational database d. A probabilistic semantic parser is trained to maximize p(y|x, d).The goal of this work is to learn a generative model of q(x, y|d) given databases such that it can synthesize more data (i.e., triplets) for training a semantic parser p(y|x, d). Note that we use different notations q and p to represent the generative model and the discriminative parser, respectively, where p(y|x, d) is not a posterior distribution of q. Instead, p is a separate model trained with different parameterization with q. This is primarily due to the intractability of posterior inference of q(y|x, d).Specifically, we use a two-stage process to model the generation of utterance-SQL pairs as follows: Figure 2 : A simplified ASDL grammar for SQL, where "sql, select, cond, agg" stands for variable types, "where, agg_id" for variable names, and "And, Or, Not" for constructor names.EQUATIONwhere q(y|d) models the distribution of SQLs given a database, and q(x|y, d) models the translation process from SQL to utterances.3.2 Database-Specific PCFG: q(y|d)We use abstract syntax trees (ASTs) to model the underlying grammar of SQL, following Yin and Neubig (2018) and Wang et al. (2020b) . Specifically, we use ASDL (Wang et al., 1997) formalism to define ASTs. To illustrate, Figure 2 shows a simplified ASDL grammar for SQL. The ASDL grammar of SQL can be represented by a set of contextfree grammar (CFG) rules, as elaborated in the Appendix. By assuming the strong independence of each production rule, we model the probability of generating a SQL as the product of the probability of each production rule q(y) = N i= q(T i ). It is well known that estimating the probability of a production rule via maximum-likelihood training is equivalent to simple counting, which is defined as follows:EQUATIONwhere C is the function that counts the number of occurrences of a production rule.3.3 SQL-to-utterance Translation:q(x|y, d)With generated SQL queries at hand, we then show how we map SQLs to utterances to obtain more paired data. We notice that SQL-to-utterance translation, which belongs to the general task of conditional text generation, shares the same output space with summarization and machine translation. Fortunately, pre-trained models (Devlin et al., 2019; Radford et al., 2019) using self-supervised methods have shown great success for conditional text generation tasks. Hence, we take advantage of a contemporary pre-trained model, namely BART , which is an encoder-decoder model that uses the Transformer architecture (Vaswani et al., 2017) .To obtain a SQL-to-utterance translation model, we fine-tune the pre-trained BART model with our parallel data, with SQL being the input sequence and utterance being the output sequence. Empirically, we found that the desired translation model can be effectively obtained using the SQLutterance pairs at hand, although the original BART model is designed for text-to-text translation only.After obtaining a trained generative model q(x, y|d), we can sample synthetic pairs of (x, y) for each database d. The synthesized data will then be used as a complement to the original training data for a semantic parser. Following Yu et al. 
(2020), we use the strategy of first pre-training a parser on the synthesized data and then fine-tuning it on the original training data. In this manner, the resulting parameters encode the compositional inductive bias introduced by our generative model. Another way to view pre-training is that the parser p(y|x, d) is essentially trained to approximate the posterior distribution q(y|x, d) through a large number of samples drawn from q(x, y|d).
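For illustration, a minimal sketch of the maximum-likelihood PCFG estimation described in Section 3.2 (simple counting of production rules) is given below. The AST representation as a flat list of rules is a simplification, and the conditioning on a specific database is omitted for brevity.

```python
import math
from collections import Counter

def estimate_rule_probs(train_asts):
    """Maximum-likelihood PCFG estimation by counting production rules.

    Each AST is assumed to be flattened into a list of (lhs, rhs) rules,
    e.g. ("sql", ("select", "cond")), collected from the ASDL-based parses.
    """
    rule_counts, lhs_counts = Counter(), Counter()
    for ast in train_asts:
        for lhs, rhs in ast:
            rule_counts[(lhs, rhs)] += 1
            lhs_counts[lhs] += 1
    # q(rule) = C(rule) / C(lhs), i.e. relative frequency per left-hand side
    return {rule: c / lhs_counts[rule[0]] for rule, c in rule_counts.items()}

def sql_log_prob(ast, rule_probs):
    """log q(y) under the strong independence assumption over rules."""
    return sum(math.log(rule_probs[(lhs, rhs)]) for lhs, rhs in ast)
```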
2
We now describe the techniques that we apply to the 3.1 million Yelp reviews from Section 3 for COVID-19 aspect analysis (Section 4.1) and time series analysis (Section 4.2), leveraging the labeled reviews of Section 3.2.First, we extract topics from reviews using unsupervised topic modeling. We train Latent Dirichlet Allocation (LDA) topic models (Blei et al., 2003) with different numbers of topics (5, 10, 25, 50, 100) . Then, we manually annotate the obtained topics with descriptive labels by examining the highestprobability words for each topic. We noticed that it is hard to align the topics discovered by LDA with the COVID aspects of interest (Section 3.2) and, therefore, we experiment with supervised and weakly-supervised techniques, as discussed next.We use our annotated dataset from Section 3.2 to train and evaluate review classifiers (via 5-fold cross-validation) for multi-class COVID-19 aspect classification. We consider two alternative training procedures: fully-supervised classification using labeled training data, and weakly-supervised classification using a small number of indicative keywords per class. The fully-supervised approaches are standard and listed at the end of this subsection.The weakly-supervised approach we use is the co-training method of Karamanolakis et al. (2019) , which works as follows. First, we manually define a small number of keywords or key phrases for each COVID-19 aspect. 5 Then, we employ a teacher-student architecture, where the teacher classifier considers keywords to annotate unlabeled reviews with aspects and the teacher-labeled reviews are used to train a student classifier. The teacher classifier does not require training and instead predicts aspect probabilities proportionally to keyword counts for each aspect. If no keywords appear in a review, then the teacher predicts the "Non-COVID" aspect. The student classifier can be any classifier, and here we consider both stan-5 Hygiene: "masks," "gloves," "mask," "glove," "shield," "sanitize," "sanitizer," "sanitizing," "disinfect," "disinfecting," "face cover," "covering face," "face covers," "wipe," "wiping," and "wipes." Transmission: "cough," "spread," "infected," "cautious," "potential germs," "concerning," "worried," "covid test," "tested positive," and "asymptomatic." Social Distancing: "social distance," "social distancing," "six feet," "6 feet," "spaced out," "6ft," and "distanced." Racism: "racist," "xenophobia," "racism," "race," "xenophobic," "asian," and "asians." Sympathy and Support: "small business," "local business," "struggling," "support," "stress," "stressful," "suffer," "sympathy," and "stressed." Service: "takeout," "outdoor dining," "take out," "re-stocked," "restocked," "curbside pickup," "online order," "rude," and "service." Other: "covid," "pandemic," "quarantine," "covid19," "lockdown," "shutdown," and "cdc." dard bag-of-words classifiers and classifiers based on pre-trained BERT representations (Devlin et al., 2019) . Note that the student is trained using the teacher's predictions and no manually annotated reviews; in contrast to the teacher, which only considers keywords, the student can identify aspects even if no keywords appear in a review. As labeled data are expensive to obtain, such a weakly-supervised technique is promising to scale classification by leveraging unlabeled reviews (and keywords) for training (Karamanolakis et al., 2019) .Overall, we consider the following approaches:1. Random: assigns reviews to a random aspect.2. 
Majority: assigns all reviews to the "Non-COVID" aspect.3. Supervised bag-of-words (BoW) classifiers: represents each review as a bag of words, where words can be unigrams and bigrams. We evaluate logistic regression (LogReg) and Support Vector Machines (SVM).4. Supervised BERT: fine-tunes pre-trained BERT (Devlin et al., 2019) for supervised aspect classification.5. Weakly-supervised Teacher: classifies a review solely based on keywords (Teacher in Karamanolakis et al. (2019)).6. Weakly-supervised Student: is trained using Teacher's predictions on unlabeled data (Student in Karamanolakis et al. (2019) ). We evaluate different modeling approaches for Student, namely, BoW-LogReg, BoW-SVM, and BERT.The above techniques classify Yelp reviews into COVID aspects using either labeled data (supervised approach) or COVID-related keywords (weakly-supervised approach) for training. In addition to COVID aspect classification, we conduct time series analysis to understand how COVID aspects evolve over time, as discussed next.To understand how reviews have changed during the pandemic, we extract time series from the text of the reviews. For a given aspect (e.g., Hygiene), the corresponding time series is computed as the percentage of the reviews at each point in time that contain at least one aspect-specific keyword (see Section 4.1). We consider two approaches: time-series cross-correlation and time-series intervention analysis, as discussed next. As a first approach, we measure the correlation between the Yelp review time series and important statistics related to COVID-19, such as the number of new COVID-19 cases in the U.S or the new COVID-19 cases in NYC and LA individually. As we do not expect Yelp review time series to have a linear relationship with COVID-19 time series, we compute the Spearman's correlation metric, which only assumes a monotonic but possibly non-linear relationship between the two time series. We also measure the Pearson's correlation metric as a robustness check.As a second approach, we consider a time series intervention analysis. First, we train a timeseries model on the observations before COVID-19 (i.e., on reviews posted before March 1, 2020) and then we compare the model's predictions against the observations during COVID-19 (i.e., on reviews posted on March 1, 2020 or later). Similar to Biester et al. (2020), we consider the Prophet time-series forecasting model (Taylor and Letham, 2018) , an additive regression model that has been shown to forecast social media time series effectively. After training Prophet on the pre-pandemic data, we check to what degree its forecasts for during COVID-19 differed from the actual values. Specifically, we compute the proportion of observations outside the 95% prediction uncertainty interval produced by Prophet after March 1, 2020.By constructing Yelp review time series and comparing them to statistics related to COVID-19, we find interesting trends in reviews during the pandemic, as discussed next.
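A minimal sketch of the two time-series analyses follows, assuming a weekly aggregated data frame with a hypothetical file name and column names (ds for the date, hygiene_pct for the share of reviews containing a Hygiene keyword, new_cases for the COVID-19 case counts). It computes Spearman's correlation with the case series and the share of pandemic-era observations falling outside Prophet's 95% prediction interval.

```python
import pandas as pd
from scipy.stats import spearmanr
from prophet import Prophet   # "fbprophet" in older releases

df = pd.read_csv("yelp_hygiene_timeseries.csv", parse_dates=["ds"])  # hypothetical file

# 1) Monotonic association between the review series and COVID-19 case counts.
rho, pval = spearmanr(df["hygiene_pct"], df["new_cases"])

# 2) Intervention analysis: fit on pre-pandemic data, then check how many
#    observations from March 1, 2020 onward fall outside the 95% interval.
pre = df[df["ds"] < "2020-03-01"].rename(columns={"hygiene_pct": "y"})
model = Prophet(interval_width=0.95).fit(pre[["ds", "y"]])

post = df[df["ds"] >= "2020-03-01"]
forecast = model.predict(post[["ds"]])
outside = ((post["hygiene_pct"].values < forecast["yhat_lower"].values) |
           (post["hygiene_pct"].values > forecast["yhat_upper"].values))

print(f"Spearman rho={rho:.2f} (p={pval:.3f}); "
      f"{outside.mean():.1%} of pandemic-era points outside the interval")
```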
2
In this section, the general flow of the algorithm will be presented. First, we explain how we generate the substitute vectors. Then, we explain the induction procedure of morphological features. In the following subsection, we explain how we use substitute vectors and morphological features and generate word embeddings. The same flow is followed for all languages we work on.A target word's substitute vector is represented by the vocabulary of words and their corresponding probabilities of occurring in the position of the target word.(1) " Nobody thought you could just inject DNA into someone 's body and they would just suck it up." Table 1 illustrates the substitute vector of "thought" in (1). There is a row for each word in the vocabulary. For instance, probability of "knew" occurring in the position of "thought" is 9.1% in this context.To calculate these probabilities, as described in (Yatbaz et al., 2012), a 4-gram language model is built with SRILM (Stolcke, 2002) on the corpora of the target languages. For French, Hungarian, Polish and Swedish we used Europarl Corpus 1 (Koehn, 2005) . For German, CONLL-X German Corpus is used (Buchholz and Marsi, 2006b ). For Hebrew, we combined HaAretz and Arutz 7 corpora of MILA 2 (Itai and Wintner, 2008) . For the tokens seen less than 5 times we replace them with an unknown tag to handle unseen words in training and test data. We should note that these corpora are not provided to the other participants.To estimate probabilities of lexical substitutes, for every token in our datasets, we use three tokens each on the left and the right side of the token as a context. Using Fastsubs (Yuret, 2012) we generated top 100 most likely substitute words. Top 100 substitute probabilities are then normalized to represent a proper probability distribution.We should emphasize that a substitute vector is a function of the context and does not depend on the target word.In order to generate unsupervised word features, the second set of features that we use are morphological and orthographic features.The orthographic feature set used is similar to the one defined in (Berg-Kirkpatrick et al.,2010) INITIAL-CAP Capitalized words with the exception of sentence initial words.The token starts with a digit.Lowercase words with an internal hyphen. INITIAL-APOSTROPHE Tokens that start with an apostrophe.The morpological features are obtained using the unlabeled corpora that are used for the generation Figure 1 : The Flow of The Modification for Handling New Features of substitute vectors, using Morfessor defined in (Creutz and Lagus, 2005) . We will only give a brief sketch of the model used. Morfessor splits each word into morphemes (word itself may also be a morpheme) which can be categorized under four groups, namely prefix, stem, suffix, non-morpheme. The model is defined as a maximum a posteriori (MAP) estimate which maximizes the lexicon (set of morphemes) over the corpus.The maximization problem is solved by using a greedy algorithm that iteratively splits and merges morphemes, then re-segments corpus using Viterbi algorithm and reestimates probabilities until convergence. Finally, a final merge step takes place to remove all non-morphemes.For a pair of categorical variables, the Spherical Cooccurrence Data Embedding (S-CODE) framework (Maron et al., 2010) represents each of their values on a sphere such that frequently co-occurring values are positioned closely on this sphere.The input of S-CODE are tuples of values of categorical variables. 
In our case, these are word tokens, their substitutes, and the tokens' morphological and orthographic features. We construct the tuples by sampling substitute words from the substitute vectors and pairing them with the corresponding morphological and orthographic features of the tokens. Each row of the co-occurrence input thus contains the target token, a substitute sampled from its substitute vector, and the morphological and orthographic features. Tokens with similar substitutes and similar morphological and orthographic features end up close to each other on the sphere at the end of this process. As in (Yatbaz et al., 2012), the dimension of the sphere is 25; in other words, for each word type seen in the corpora we obtain a 25-dimensional vector.
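A minimal sketch of how such co-occurrence rows could be assembled is given below, assuming the substitute vectors (one per token occurrence), the morphological features, and the orthographic features have already been computed; the data structures and names are illustrative rather than the actual pipeline.

```python
import random

def scode_rows(tokens, substitute_vectors, morph_feats, orth_feats, n_samples=5):
    """Build S-CODE co-occurrence rows, one per sampled substitute.

    substitute_vectors[i] is the (substitute, probability) list for the i-th
    token occurrence; morph_feats / orth_feats map word types to feature lists.
    """
    rows = []
    for i, tok in enumerate(tokens):
        words, probs = zip(*substitute_vectors[i])
        for sub in random.choices(words, weights=probs, k=n_samples):
            rows.append((tok, sub, morph_feats[tok], orth_feats[tok]))
    return rows
```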
2
In this study, we use dictionary definitions to extract the features and relations of concepts. The aim of this study is to discover linguistic patterns that are used in the definitions to denote the specific features of concepts. In order to find regularities among the definitions, we restricted the analysis to the conceptual groups of ceramic processes and ceramic defects which are described by a limited set of features.We analyzed 222 definitions from three dictionaries of the ceramics domain: Diccionario científico-práctico de la cerámica, Diccionario de cerámica y Terminología de los defectos cerámicos [12, 13, 21] . These dictionaries are published on paper and have been digitalized [3] .From these dictionaries, we extracted the definitions of the concepts included in the categories of ceramic production processes and defects in the ceramic product. It is important for our study that the conceptual groups to be homogeneous enough to be able to identify a relevant set of features for all of them. In total, we analyzed 135 definitions of the conceptual group ceramic processes and 87 ceramic defects.The definitions in these dictionaries vary considerably as regards to the use of words and their formal structure [2] . Many of them follow the analytical model with the formula Definiendum = genus + differentia, although we can also find descriptions by means of synonyms, paraphrases, etc. In this analysis, we did not focus on the formal aspects of the definitions, but on the conceptual features that they provide. We analyzed the definitions in order to obtain a set of features which are commonly used in the descriptions of the concepts of each conceptual group. We followed the proposal of [16] which distinguishes between the name of the feature and its value. The name of the feature acts as a label that indicates the content of the feature, while the value offers specific information about a concept. For example, cause is the name and friction the value of a feature for the concept abrasion.This conceptual analysis was carried out by segmenting the information obtained from the definition and by assigning a label or code which describes the type of information that each fragment represents, as seen in Figure 1 . In order to carry out this analysis, we used the program for the qualitative analysis named Atlas.ti. This program allows us to: segment the information, assign a descriptive code, create relations between them, obtain graphic representations of the conceptual structure of data and query as in a database.The result is a list of essential features to describe each category and a set of values for each of these features. The features detected for these categories are as follows (frequency of appearance in the corpus is indicated in the parenthesis):Production process features are PROCEDURE (69), OBJECTIVE (103), PATIENT (56), MATERIAL STATE (15), INSTRUMENTS (22) , PREVIOUS STAGE (6) and NEXT STAGE (8) .Ceramic defect features are PHYSICAL ASPECT (54), ZONE (16), CAUSE (44), PHASE (15) , METHOD (5) and PRODUCT (25). We marked the linguistic expression that precedes each feature. In some cases it is possible to identify a linguistic marker that introduces a particular feature. As Figure 2 illustrates, the feature OBJECTIVE of a process is introduced into the first definition by the linguistic pattern "que sirve para" (in English "which serves to") and in the second, by the marker "con el fin de" ("for the purpose of"). 
In many cases, however, it was not possible to identify a linguistic marker introducing the feature; instead, we found recurrent syntactic structures in the expression of the feature itself. As Figure 3 shows, the feature PROCEDURE is expressed in both definitions by a sentence whose main verb is in the gerund.
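As a small illustration of marker-based feature detection of the kind described above, consider the sketch below. The OBJECTIVE patterns come from the examples in the text; the CAUSE patterns and the sample definition are invented placeholders and not drawn from the analyzed dictionaries.

```python
import re

# Hypothetical marker lexicon mapping feature labels to Spanish patterns.
MARKERS = {
    "OBJECTIVE": [r"que sirve para", r"con el fin de"],   # from the examples above
    "CAUSE": [r"debido a", r"causad[oa] por"],            # assumed, for illustration
}

def detect_features(definition: str):
    """Return the features whose introducing markers occur in a definition."""
    found = []
    for feature, patterns in MARKERS.items():
        if any(re.search(p, definition, flags=re.IGNORECASE) for p in patterns):
            found.append(feature)
    return found

print(detect_features("Horno que sirve para cocer piezas de cerámica."))
# ['OBJECTIVE']
```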
2
Briefly, the algorithm for the ISTFS proceeds according to the following steps. This algorithm can also find the TFSs that are in the subsumption relation, i.e., more-specific or more-general, by preparing subsumption checking tables in the same way it prepared a unifiability checking table.Unifiability Checking Table Let D (= {F 1 , F 2 , . . . , F n }) be the set of data TFSs.When D is given, the ISTFS prepares two tables, a path value table D π,σ and a unifiability checking table U π,σ , for all π ∈ Path D and σ ∈ Type. 2 A TFS might have a cycle in its graph structure. In that case, a set of paths becomes infinite. Fortunately, our algorithm works correctly even if the set of paths is a subset of all existing paths. Therefore, paths which might cause an infinite set can be removed from the path set. We define the path value table and the unifiability checking table as follows:D π,σ ≡ {F|F ∈ D ∧ FollowedType(π, F) = σ } U π,σ ≡ ∑ τ (τ∈Type ∧ σ τ is defined) |D π,τ | 2Type is a finite set of types.Assuming that σ is the type of the node reached by following π in a query TFS, we can limit D to a smaller set by filtering out 'non-unifiable' TFSs. We have the smaller set:U π,σ ≡ τ (τ∈Type ∧ σ τ is defined) D π,τU π,σ corresponds to the size of U π,σ . Note that the ISTFS does not prepare a table of U π,σ statically, but just prepares a table of U π,σ whose elements are integers. This is because the system's memory would easily be exhausted if we actually made a table of U π,σ . Instead, the ISTFS finds the best paths by referring to U π,σ and calculates only U π,σ where π is the best index path.Suppose the type hierarchy and D depicted in Figure 1 are given. The tables in Figure 2 show D π,σ and U π,σ calculated from Figure 1.In what follows, we suppose that D was given, and we have already calculated D π,σ and U π,σ .The best index path is the most restrictive path in the query in the sense that D can be limited to the smallest set by referring to the type of the node reached by following the index path in the query.Suppose a query TFS X and a constant k, which is the maximum number of index paths, are given. The best index path in Path X is path π such that U π,σ is minimum where σ is the type of the node reached by following π from the root node of X. We can also find the second best index path by finding the path π s.t. U π,σ is the second smallest. In the same way, we can find the i-th best index path s.t. i ≤ k.Suppose k best index paths have already been calculated. Given an index path π, let σ be the type of the node reached by following π in the query. An element of D that is unifiable with the query must have a node that can be reached by following π and whose type is unifiable with σ . Such TFSs (= U π,σ ) can be collected by taking the union of D π,τ , where τ is unifiable with σ . For each index path, U π,σ can be calculated, and the D can be limited to the smaller one by taking their intersection. After filtering, the ISTFS can find exactly unifiable TFSs by unifying the query with the remains of filtering one by one.Suppose the type hierarchy and D in Figure 1 are ⊥       ⊥ : CDR : CAR F 1 =    ε • • • • • • • • • {F 1 , F 2 , F 3 } • CAR: • • {F 1 } • • {F 2 } {F 3 } • • • CDR: • • • • • • • • • {F 1 , F 3 } {F 2 } CDR:CAR: • • • {F 1 } • • • {F 3 } • • • CDR:CDR: • • • • • • • • • {F 1 } {F 3 } CDR:CDR:CAR: • • • • {F 1 } • • • • • • CDR:CDR:CDR: • • • • • • • • • • {F 1 }• is an empty set. 
The query TFS in this example is cons [CAR: 6, CDR: list]. In Figure 2, the values U π,σ for which the pair (π, σ) occurs in the query are marked with an asterisk. The best index paths are determined in ascending order of these asterisked U π,σ values. In this example, the best index path is CDR:CAR: and its corresponding type in the query is 6; the unifiable TFSs can therefore be found by referring to D CDR:CAR:,6 , which is {F3}.
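The filtering logic can be sketched as follows, assuming D π,σ is stored as a dictionary from (path, type) pairs to sets of TFS identifiers and that a unifiable predicate implements the type hierarchy. This is an illustration of the idea, not the ISTFS implementation.

```python
def u_size(D, unifiable, path, sigma):
    """|U_{path,sigma}|: how many data TFSs survive filtering on this path."""
    return sum(len(ids) for (p, tau), ids in D.items()
               if p == path and unifiable(sigma, tau))

def best_index_paths(D, unifiable, query_paths, k):
    """query_paths maps each path in the query to the type found there."""
    return sorted(query_paths.items(),
                  key=lambda pt: u_size(D, unifiable, *pt))[:k]

def candidate_tfs(D, unifiable, index_paths):
    """Intersect U_{pi,sigma} over the chosen index paths."""
    sets = [set().union(*[ids for (p, tau), ids in D.items()
                          if p == path and unifiable(sigma, tau)])
            for path, sigma in index_paths]
    return set.intersection(*sets) if sets else set()
```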
2
As mentioned in the introduction section, there are two causes that can make the Seq2seq model produce profanity. First, in the training phase, Seq2seq models capture the language patterns within the training corpus. Thus, a Seq2seq model may also learn the profanity patterns from the training corpus. Second, in the testing phase, some manipulated expressions may be fed to the Seq2seq model to trigger profanity outputs from it.In the rest of this section, we present a profanityavoiding training framework with certified robustness to handle these two causes that lead to profanity. The framework has two components: the pattern-eliminating training (PET) model to barrier the profanity patterns in the training phase (Section 3.1), and the trigger-resisting training (TRT) model to maintain the robustness of the generation model against triggering expressions in the testing phase (Section 3.2). Besides, we also provide theoretical analysis to estimate the robustness of the proposed TRT model, i.e., under what attack strength (in terms of the perturbation radius), the proposed TRT model would still be certifiably robust. Consider an input-output training corpus C ={(X i , Y i )} n i=1, the learning objective function of the Seq2seq model is:EQUATIONwhere θ denotes the vector of model parameters and l S2S denotes the loss function associated with a sentence pair (X, Y ), such as cross entropy loss. As mentioned at the beginning of this section, the profanity patterns in the training set can trigger profanity. To alleviate the effect of sentences with profanity patterns, we propose an efficient and effective training method, PET. PET includes a similarity-based loss that penalizes the cases where the generated sentence's semantics is close to the semantics of the phrases in the profanity seed set. In essence, PET first generates a set of outputs sentences by perturbing the representation of the input sentence in a sentence pair. These sentences serve as diverse variants of the original output sentence. Then PET minimizes the maximum of the similarity-based loss. These two steps enhance the generalization ability of PET. (Figure 1) To implement PET, for each sample (X i , Y i ) ∈ C, we utilize the sequence model to generate a series of output sentences PC i = {Ŷ ij } m j=1 by perturbing the encoded representation of X i . With these augmented outputs and the set of seed profanity, we define the penalty term which barriers the generated outputs from the profanity as:EQUATIONwhere:EQUATIONis a hinge loss and function d(•) is a distance metric. Here, we choose cosine distance function, which is proved to be effective for quantifying the similarity of high-dimensional data samples like encoded representationsh enc , to implement d(•). d(Ŷ ij , S k )is calculated by first transforming sentencesŶ ij and S k into their vector representationsŷ ij and s k via the encoder g; and then calculating the cosine distance betweenŷ ij and s k . This hinge loss barriers the generated samples that are within ζ distance from S k . In practice, the loss is added to the conventional training loss of L S2S as the overall objective function, i.e.,EQUATIONMoreover, in this paper, we get the perturbed data PC i by adding i.i.d. noise vectors generated from von-Mises Fisher (vMF) distribution (Fisher et al., 1993) around the encoded representation of an input sentence, i.e., h enc . VMF distribution is a directional distribution over unit vectors in the space of R d . 
The probability density function of vMF distribution for the p-dimensional vector x is given by:f p (x; µ, κ) = C p (κ) exp κµ T x ,where µ = 1 and κ(κ > 0) are the mean direction and the concentration parameter, respectively. The mean direction µ acts as a semantic focus on the unit sphere and κ describes the concentration degree of the generated relevant highdimensional representations around it. The larger κ, the higher concentration of the distribution around the mean direction µ. The normalization constant C p (κ) =The reason that we choose to use vMF distribution to generate perturbations is two-fold. First, the vMF distribution naturally describes the cosine similarity used in this paper. Second, vMF distribution models the representation vectors from an integrated perspective instead of a single-dimensional perspective. Therefore, the perturbations on h enc tend to produce augmented representation vectors with diverse overall directions rather than minor differences in every single dimension. Note that the augmented samples can also be constructed based on other transformations such as embedding dropout (Gal and Ghahramani, 2016) . Different augmentation methods will not fundamentally impact the theoretical results in this paper.Theoretical Remarks: Compared with other conventional regularization terms like minimizing the similarity expectation, optimizing Eq. 2can be more efficient since the optimization process consists of much fewer derivative operations. Through theoretical analysis, we can regard Eq. 2as adding a gradient-norm to the conventional expectation minimization objective. Please refer to Appendix B for the detailed derivation.Besides, Eq. 2should not be confused with the adversarial training objective (Madry et al., 2017) . The major difference is that Eq. (2) does not really include an inner optimization objective. Instead, we simply pick the perturbed sample with the maximum similarity for subsequent optimization.As mentioned at the beginning of this section, apart from the profanity patterns in the training set, another cause that may result in profanity is the well-designed adversarial inputs in the testing phase, like (Cheng et al., 2020) . This section presents a theoretically-provable trigger-resisting training (TRT) method to enhance the robustness of Seq2seq models. We extend the randomized smoothing technique (Cohen et al., 2019) to get a smoothed model with the provable robustness guarantee given possible perturbations on the input sequence X. Particularly, we derive new theoretical results on using vMF distribution as random noise for randomized smoothing.Typically, perturbing input sentences are done by substituting one or more tokens in the sentences. Such a process can result in changes in the encoded representation h enc . Here we certify the robust radius via the encoded representation h enc instead of the input X. The reason here is two-fold. First, Seq2seq models typically take discrete token sequences as inputs and learn word embeddings from scratch. It is difficult for us to specify a radius measure for such sparse discrete data. Second, the possible types of modifications on the input sentence X are various, such as word replacement, adding additional text, etc. Some of these modifications are difficult to be regarded as perturbing on the embedding of single words. Nevertheless, almost all the changes are reflected in the encoded representation of the entire sentence. 
That is why we choose to certify the robust radius of h enc .Let us use g() to denote the decoder in the Seq2seq model. The smoothed decoder g and the base model g have the same architecture, and the parameters of their encoders are identical. Thus, given an input sentence X, their encoding result h enc are the same. Given an input X, the smoothed model outputs exactly the same sequence as g's when the modifications on the input X causes the encoded representation h enc to deviate within a radius R. Thus, a smoothed Seq2seq model enjoys certified robustness facing evasion attack samples. Formally, we can write the t-th step output from the smoothed model g (X) as:EQUATIONwhere P ( ; φ) stands for the distribution of the random noise , parameterized by φ. * is the convolution operator. g t denotes the decoding function at step t. In this section, we continue to use vMF distribution to implement the sampling distribution:EQUATIONwhere n denotes the dimension of all the input representation vectors after concatenation and t denotes the concatenated vector. S D denotes the domain of h enc and t, which are both spheres in D dimensions.With the smoothed decoder defined, now we derive the radius in which the model's robustness is guaranteed. In particular, given vMF as the random noise distribution, we can prove the following robustness guarantee for the smoothed model. For simplicity, we narrow the discussion to the generation of one specific token, i.e., the t-th token in the output, without losing generality.Input :Batched training dataset {(Xi, Yi)} N i=1 1 // Pattern-Eliminating Training; 2 for each batch {(Xi, Yi)} B i=1 in {(Xi, Yi)} N i=1 do 3 for 1 ≤ i ≤ B do 4Sample augmented outputs PCi;5 end 6Update the model parameters by minimizing Eq. (4) using the samples in{(Xi, Yi)} B i=1 and {PC} B i=1 ; 7 end 8 // Trigger-Avoiding Training; 9 for each batch {(Xi, Yi)} B i=1 in {(Xi, Yi)} N i=1 do 10Generate noise samples(j) i ∼ vMF(µ, κ) for 1 ≤ i ≤ m, 1 ≤ j ≤ B; 11Create an empty set D to store augmented samples ;12 // Iteratively construct a batch of perturbedsamples (h enc * , Y i ); 13 for 1 ≤ i ≤ B do 14Calculate the perturbed sample (h enc * , Y i ), using the noise samples generated above;15Add the perturbed samples (Xi, Y i ) to the set D;16 end 17Update the parameters of the decoder using the augmented samples in D; 18 end Theorem 3.1 (Certified Radius). Consider a specific decoding step t. The encoded representation of X is denoted as h enc . Let k * and k be the tokens that the generator returns with top and runnerup probability, i.e., k * = arg max k (g t (h enc )) and k = arg max k,k =k * (g t (h enc )). For any perturbations on h enc that is within radius R, the output of g t (X) is unchanged, i.e., g t (h enc ) = g t (h enc + ) for all within R from h enc . Here, R is calculated as:EQUATIONwhere g t,k * (h enc ) and g t,k (h enc ) are the probability of generating k * and k in step t, respectively.We leave the detailed proof of Theorem 3.1 in the Appendix A.On getting the radius R, we now follow existing work like (Cohen et al., 2019; Yang et al., 2020; Salman et al., 2019) and present the practical training method to get the smoothed model g . Since the method is tightly coupled with the PET, we illustrate the overall training framework that involves both strategies in Algorithm 1.In Algorithm 1, we first conduct PET on by minimizing Eq. (4) (line 1-7). 
After that, we adopt TRT to update the smoothed Seq2seq model's de-coder, which is built upon the base model, using the augmented samples (line 8-18). The encoder here is not updated so that the encoded representations remain stable. These augmented samples are generated to imitate a testing phase attack against the smoothed Seq2seq model so that the model is trained to be more robust. (line 13-16) Here, for a sentence pair (X i , Y i ), we describe the augmented sample as (h enc i , Y i ), since we can consider h enc i as fixed in this phase. Now, consider an augmented sample (h enc i , Y i ) and a specific decoding step t. From the perspective of an attacker, we wish to find a perturbed representation h enc * that maximize the loss of generating the ground truth output Y it (i.e., the t-th token in the output sequence Y i . The perturbed representation should be within a ball around h enc measured by the distance metric d. Thus, ideally, the augmented samples should satisfy:EQUATIONwhere L(h enc * , Y it ) is derived from Eq. (4) by replacing the encoding network with the encoded representation h enc . Finally, we use the augmented samples to update the decoder of the Seq2seq model.
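For illustration, a minimal PyTorch sketch of the pattern-eliminating penalty (Eqs. 2-3) is shown below: it takes the encoder representations of the perturbed outputs and of the profanity seed phrases and returns the worst-case hinge loss. The margin ζ and the tensor shapes are assumptions; in practice the penalty is added to the standard Seq2seq loss with a weighting hyper-parameter.

```python
import torch
import torch.nn.functional as F

def pet_penalty(perturbed_reps, seed_reps, zeta=0.5):
    """Worst-case hinge penalty over perturbed outputs and profanity seeds.

    perturbed_reps: (m, d) encoder representations of the m perturbed outputs.
    seed_reps:      (K, d) encoder representations of the profanity seed set.
    """
    sim = F.cosine_similarity(perturbed_reps.unsqueeze(1),
                              seed_reps.unsqueeze(0), dim=-1)   # (m, K)
    hinge = torch.clamp(zeta - (1.0 - sim), min=0.0)            # zeta - cosine distance
    return hinge.max()

# Example with random placeholder representations.
loss_pet = pet_penalty(torch.randn(4, 256), torch.randn(10, 256))
```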
2
In order to compute the lexical complexity metric, token frequency, morphology and orthography were included as our variables. Below, the methods for computing the values for each of these variables are discussed.In order to compute our metric for Malayalam, we first obtained a corpus from the Leipzig Corpora Collection containing 300,000 sentences from Malayalam Wikipedia articles and 100,000 sentences from Malayalam news crawl (Goldhahn et al., 2012) . The corpus was then preprocessed by removing punctuation and special characters, and then tokenized using whitespace. The text was also normalized to remove inconsistencies in spelling using the Indic NLP Library 1 and this resulted in 4,711,219 tokens and 762,858 unique types.The corpus was used to collect counts for each word and then scaled them between 0 and 1, which was then inverted such that the most frequent tokens have a value closer to 0 and the less frequent tokens will have a value approaching 1. This score indicated the relative frequency of each word in this corpus, and the idea that highly frequent words are much easier to process than those that have lower frequency.Our morphology metric required us to obtain information about the root and the morpho-logical affixes for a given word. Given the rich morphology and compounding processes in the language, we had to make use of a two-step process to compute our scores.First, SandhiSplitter (Devadath et al., 2014) was used to split tokens that are compound words into their constituent component words. For example, consider the compound word കാരണമായിരി ണം (kAraNamAyirikkaNaM) കാരണമായിരി ണം ⇒ കാരണം + ആയിരി ണം kāraṇamāyirikkaṇaṁ⇒ kāraṇaṁ+ āyirikkaṇam" must be the reason" ⇒ "reason" + "must be"As a second step, these results were passed through IndicStemmer 2 , a rule-based stemmer for Malayalam, which further decomposed the words into stems and affixes. As an example, the word േലഖന െട (lēkhanaṅṅaḷuṭe) meaning "Of articles". is decomposed into the stem േലഖനം (lēkhanam) meaning article with the suffix -ൾ ( ṅṅal) indicating plural and --ുെട (uṭe) indicating the Genitive case. In our metric we only considered suffixes as in Malayalam usually contains always suffixes being added to the end of the stem.After this two-step process, we are able to obtain the stems and suffixes for a given word.By simply summing the number of stems and suffixes, the total number of morphemes contained in each word is computed. For example, the word സ ി ം (sampatsamr d'dhiyum) meaning "prosperity" is a compound word split into constituent words സ ് (sampatt) meaning "richness" and സ ി ം (samr d'dhiyum) meaning "and plentiful". സ ി ം (samr d'dhiyum) is further stemmed to stem word സ ി (samr d'dhi) meaning "plentiful" and suffix -ും (um) meaning "-and". സ ് (sampatt) is a root word. Thus, the number of morphemes in this case is three, counting the two stems and one suffix.Based on this pre-processing, we then calculate the total number of morphemes for each whole word and then scale this number between 0 and 1 to give a morpheme score. We note that there could be several different ways to compute the morpheme score, as affixes themselves are not all alike. In this preliminary study, it was not immediately apparent how the differing costs for various affixes could be calculated. Additionally, fine-grained information regarding the morphological properties of the affixes (e.g. whether they were inflectional or derivational) was not easily obtained with existing tools and resources. 
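A minimal sketch of the token-frequency and morpheme scores described above is given below, assuming the corpus tokens and a word-to-morphemes mapping (e.g., produced by the SandhiSplitter and IndicStemmer pipeline) are already available; names and structures are illustrative.

```python
from collections import Counter

def scaled(values):
    """Min-max scale a dict of counts to [0, 1]."""
    lo, hi = min(values.values()), max(values.values())
    span = (hi - lo) or 1
    return {k: (v - lo) / span for k, v in values.items()}

def frequency_scores(tokens):
    """Inverted, scaled corpus counts: highly frequent words score near 0."""
    return {w: 1.0 - s for w, s in scaled(Counter(tokens)).items()}

def morpheme_scores(morph_analysis):
    """morph_analysis maps each word to its list of stems and suffixes."""
    return scaled({w: len(morphemes) for w, morphemes in morph_analysis.items()})
```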
In future work, we plan to explore this possibility by enhancing the morphological analyzer's output.Malayalam is an alphasyllabic writing system that has its source in the Vatteluttu alphabet from the 9 th century. Its modern alphabets have been borrowed from the Grantha alphabet. It consists of 15 vowels and 36 consonant letters.We devised a script score based on complexity of the script in the following three ways:-In the alpha-syllabic script of Malayalam, vowels may either appear as letters at the beginning of a word or as diacritics. Consonants themselves are understood to have an inherent schwa, which is not separately represented. The diacritics will appear either left or right of the consonant it modifies. If it appears to the left, there will be a discrepancy in the phonemic and the orthographic order, as the vowel will always be pronounced after the consonant, but read before the consonant actually appear in the text. For example:ക +െ◌ = െക ka + .e = keHere the vowel violates the order in which it is spoken. Similarly: ക +േ◌ = േക (ka + ē = kē), as seen in േകൾ ക (kēḷkkuka) meaning "hear". Such inconsistencies in spoken and visual order have been shown to incur a cost in Hindi word recognition (which is also an alpha-syllabic script) (Vaid and Gupta, 2002) .In order to capture the lexical processing cost for such a discrepancy, we give a penalty of 1 every time it occurs in the word.In Malayalam, the diacritic may also appear above or below a consonant. In such a case, we we give a penalty of 0.5 to the word. For example the symbol ◌് also known as virama is used to replace the inherent schwa sound of consonants with ŭ. As in ക + ◌് = ക് (ka + virama = ku)A penalty of one is assigned for every two letters that form a composite glyph. For example: മ ി (mantri) = മന് + ി (man + tri) where the new composite glyph is (ntra). With the above complexity rules in place, the total penalty cost for each whole word is calculated. Then the total penalty for each word is scaled linearly to between 0 and 1 to give us an orthographic score.In order to evaluate our lexical complexity metric, we used a lexical decision task paradigm to collect reaction times for a sample of Malayalam words. More complex words would result in longer reaction times, and vice versa. This would help us evaluate whether our lexical complexity model could predict reaction times for the given set of words. We used a well-understood experimental paradigm in the form of a lexical decision task. In such a setup, a participant will see a word stimuli on a screen which they have to classify as either a word or a non-word using a button press. The response time (RT) is calculated from the point the word appears on the screen to the point where the participant presses the response button.Our task consisted of a balanced set of 50 Malayalam words and 50 pseudowords. Pseudowords follow the phonotactics of the language, but have no lexical meaning (i.e. are not legitimate words). In order to select words for the task, two sets of 25 words were randomly sampled from the unique tokens obtained from the Leipzig Corpus. The first set was randomly sampled from words with a frequency score between the range of 0.1 to Figure 1 : Stimuli word shown for 2500ms. The first word is a proper Malayalam word ("vivaraṅṅaḷ" meaning "information") hence the correct response is to press the 'a' key. 
The second word is non-word (vamittam) and therefore, the correct response is to press 'l' key.0.4 to obtain high frequency words as calculated by the metric. The second set was chosen similarly but with frequency score between the range of 0.7 to 0.9 to yield low frequency words. If the sampled word turned out to be an English word written in Malayalam or happens to be a proper noun, it was replaced with another until both sets had 25 words each.The pseudowords were constructed in keeping with the phonotactics of Malayalam. Both the pseudowords and the valid words were constrained in length between 6 and 14 characters. Note that we do not take into consideration the reaction times for the pseudowords; they are simply distractors for the participants.Participants included 38 students from S.N. College, Kerala, who volunteered for the study. Participants included 20 females and 18 males between the ages of 18 and 23 (mean age of 19.7). All participants were native speakers of Malayalam and had formal education in Malayalam upto grade 10. Participants were tested individually on a computer running the lexical decision task on the JsPsych stimulus presentation software (De Leeuw, 2015) . Each participant was asked to press either the 'a' key or the 'l' key for word and non-word respectively. The order of words and pseudowords was randomized for each participant. Participants were instructed to read the word presented and respond with the appropriate button press. Each trial consisted of a word that was presented for 2500ms. A fixation cross was placed in the center for 1600ms between each trial. The first 10 trials were practice trials from a word set different from the study. This enabled participants to get familiarized with the task.
2
In the following sections, we begin with formally introducing the problem statement. We then outline the embeddings used in KG-CRUSE. Following this, we discuss the architecture of KG-CRUSE as illustrated in Figure 2 . Finally, we describe decoding process used by KG-CRUSE during the inference step.We describe the problem statement similar to Moon et al. (2019) . The KG is defined as G =V KG × R KG × V KG , where VKG is set of entities and R KG is set of relations in the KG. Facts in the KG are denoted by triples, and each has the form (e, r, e') where e, e' ∈ V KG are entities and r ∈ R KG is the relation connecting them.In addition to the KG, each input contains a dialogue D ∈ D, represented as a sequence of utterances D = {s 1 , ..., s n }, and the set of entities x e = {x (i) e } occurring in the user's last utterance s n , where x (i) e ∈ V KG . The output is represented as y = {y e , y r }, where y e is a set of entity paths y e = {y r = {y (i) r,t } T t=1is a sequence of relations from the KG connecting x Capturing the semantic information in the dialogue context is an important component of our model. SBERT is a contextual sentence encoder that captures the semantic information of a sentence in a fixed-size vector. We encode pieces of text using Equation 1. The text is first sent though a pretrained BERT model to obtain the contextual representation of its tokens. The sentence embedding is computed by taking a mean-pool of the contextual token representations. The dialogue context is constructed by concatenating a maximum of three previous utterances and is then passed through SBERT encoder to obtain a fixed-size contextual dialogue representation.EQUATIONIn order to align the semantic vector space of the dialogue representations and the KG representations, we use SBERT to encode the KG elements. As KG entities and relations can be words or phrases, SBERT can effectively capture their semantic information. We use the publicly available SBERT-BERT-BASE-NLI 2 model with meanpooling as our SBERT encoder.KG-CRUSE learns to traverse a path on the KG by learning a function π θ that calculates the probability of an action a t ∈ A t given the state s t . The state s t contains the dialogue history and entities already traversed by KG-CRUSE while decoding the paths, while a t is the set of edges from the KG available to KG-CRUSE for extending its path.The state s t at step t is defined as a tuple (D, (r 1 , e 1 , ..., r t−1 , e t−1 )), where D is the dialogue context and r i , e i (i < t) are the relation and entity already decoded by KG-CRUSE at step i. The initial state s 0 is denoted as (D, ∅), where ∅ is the empty set.At step t, an action has the form a t = (r t , e t ) ∈ A t , where A t is the set of all possible actions available to the model at step t. A t includes all outgoing edges of e t−1 in the KG G, i.e. A t is the set of all the outgoing edges of the entity decoded by KG-CRUSE at timestep t − 1. To let the agent terminate the search process, we add self-loop edges to every entity node in the graph denoting no operation ("self-loop"). The action a t is represented as a concatenation of the relation and entity embedding a t = [r t ; e t ], where r ∈ R dr , e ∈ R de and R de , R dr are the size of the entity embedding and relation embedding respectively. At step 1, KG-CRUSE chooses between the entities mentioned in s n for path traversal. The relation associated with action at step 1 is the zero vector. 
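A minimal sketch of the SBERT encoding step using the sentence-transformers library is shown below; the checkpoint name and the toy dialogue/KG strings are placeholders, and the exact model used may differ.

```python
from sentence_transformers import SentenceTransformer

# Mean-pooled SBERT embeddings place the dialogue context and KG elements in
# the same semantic space (Equation 1).
encoder = SentenceTransformer("bert-base-nli-mean-tokens")  # assumed checkpoint

dialogue = " ".join(["Any good sci-fi movies?",
                     "I loved Interstellar.",
                     "Who directed it?"])                   # last (up to) 3 utterances
texts = [dialogue, "Interstellar", "directed by", "Christopher Nolan"]

dialogue_emb, head_emb, rel_emb, tail_emb = encoder.encode(texts)
print(dialogue_emb.shape)   # (768,)
```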
As mentioned, the state s t contains the dialogue context and action history (path history). This sequential information in s t is modelled using an LSTM:d = W d D (2) h 0 = LSTM(0, d) (3) h t = LSTM(h t−1 , a t−1 ), t > 0 (4)where D is the contextual dialogue embedding obtained using Equation 1 and W d is a learnable matrix that maps the dialogue embedding to the LSTM input dimension. Given the hidden state representation h t at time t, KG-CRUSE assigns a probability to each action using Equation 6.xEQUATIONThe hidden state representation h t is passed through a two-layered dense network with ReLU activation (Nair and Hinton, 2010) in the first layer. The LSTM weights, W 2,θ ∈ R d h ×d h and W 3,θ ∈ R (dr+de)×d h are the learnable parameters, and d h is the LSTM hidden representation size.We train KG-CRUSE by minimising the crossentropy loss on the entities decoded at each timestep. Additionally, we train the model using teacher forcing (Sutskever et al., 2014) , wherein the model makes each action conditioned on the gold history of the target path. To prevent overfitting, we add L 2 regularisation to the parameters of the model. During training, we do not fine-tune the SBERT architectures, but back-propagate the gradients to the entity and relation embeddings.Once the model is trained, KG-CRUSE takes the dialogue history and the entities mentioned in the current utterance as input, a horizon T and outputs a set of entity paths, relations paths of length T along with the probability score of each path. During inference, we remove self-loops from the KG except for the self-loop with label "self-loop" introduced in section 3.3. We do so to allow the agent traverse diverse paths rather than staying at entities mentioned in the dialogue history.
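A PyTorch sketch of the action-scoring policy (Eqs. 2-6) is given below; the embedding sizes and the exact LSTM wiring are assumptions, and candidate masking, beam decoding, and the training loss are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KGCrusePolicy(nn.Module):
    def __init__(self, d_dial=768, d_act=1536, d_h=512):
        super().__init__()
        self.w_d = nn.Linear(d_dial, d_act)   # W_d: dialogue embedding -> LSTM input size
        self.lstm = nn.LSTMCell(d_act, d_h)   # models the state s_t
        self.w2 = nn.Linear(d_h, d_h)
        self.w3 = nn.Linear(d_h, d_act)

    def init_state(self, dialogue_emb):
        """h_0 from the SBERT dialogue embedding, shape (1, d_dial)."""
        return self.lstm(self.w_d(dialogue_emb))

    def step(self, prev_action_emb, state):
        """h_t from the previous action a_{t-1} = [r_{t-1}; e_{t-1}]."""
        return self.lstm(prev_action_emb, state)

    def action_probs(self, h_t, candidate_actions):
        """candidate_actions: (|A_t|, d_act) embeddings of the outgoing edges."""
        scores = candidate_actions @ self.w3(F.relu(self.w2(h_t))).squeeze(0)
        return F.softmax(scores, dim=-1)

policy = KGCrusePolicy()
h, c = policy.init_state(torch.randn(1, 768))
h, c = policy.step(torch.randn(1, 1536), (h, c))
probs = policy.action_probs(h, torch.randn(12, 1536))   # 12 candidate edges
```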
2
Our proposed framework includes an encoder and a generator, as shown in Figure 2 . The encoder takes the citation network and the citing and cited papers as input, and encodes them to provide background knowledge and content information, respectively. The generator contains a decoder that can copy words from citing and cited paper while retaining the ability to produce novel words, and a salience estimator that identifies key information from the cited paper. We then trained the framework with citation function classification to enable the recognition of why a paper was cited.Our encoder (the yellow shaded area in Figure 2 ) consists of two parts, a graph encoder that was trained to provide background knowledge based on the citation network, and a hierarchical RNN-based encoder that encodes the content information of the citing and cited papers.We designed a citation network pre-training method for providing the background knowledge. In detail, we first constructed a citation network as a directed graph G = (V, E). V is a set of nodes/papers 2 and E is a set of directed edges. Each edge links a citing paper (source) to a cited paper (target). To utilize G in our task, we employed a graph attention network (GAT) (Veličković et al., 2018) as our graph encoder, which leverages masked self-attentional 2 We use node and paper interchangeably layers to compute the hidden representation of each node. This GAT has been shown to be effective on multiple citation network benchmarks. We input a set of node pairs {(v p , v q )} into it for training of the link prediction task. We pre-trained our graph encoder network using negative sampling to learn the node representations h n p for each paper p, which contains structural information of the citation network and can provide background knowledge for the downstream task.Given the word sequence {cw i } of the citing sentence's context and the word sequence {aw j } of the cited paper's abstract, we input the embedding of word tokens (e.g., e(w t )) into a hierarchical RNNbased encoder that includes a word-level Bi-LSTM and a sentence-level Bi-LSTM. The output wordlevel representation of the citing sentence's context is denoted as {h cw i }, and the cited paper's abstract is encoded similarly as its word-level representation {h aw j }. Meanwhile, their sentence-level representations are represented as {h cs m } and {h as n }.Our generator (the green shaded area in Figure 2 ) is an extension of the standard pointer generator (See et al., 2017) . It integrates both background knowledge and content information as context for text generation. The generator contains a decoder and an additional salience estimator that predicts the salience of sentences in the cited paper's abstract for refining the corresponding attention. The overall architecture of the proposed framework. The encoder is used to encode the citation network and papers' (both citing and cited) text. The generator estimates salience of sentence in the cited paper's abstract, and utilizes this information for text generation. The framework is additionally trained with citation functions.The decoder is a unidirectional LSTM conditioned on all encoded hidden states. 
The attention distribution is calculated as in Bahdanau et al. (2015). Since we considered both the citing sentence's context and the cited paper's abstract on the source side, we applied the attention mechanism to $\{h^{cw}_i\}$ and $\{h^{aw}_j\}$ separately to obtain two attention vectors $a^{ctx}_t$, $a^{abs}_t$ and their corresponding context vectors $c^{ctx}_t$, $c^{abs}_t$ at step $t$. We then aggregated the input context $c^*_t$ from the citing sentence's context, the cited paper's abstract, and the background knowledge by applying a dynamic fusion operation based on modality attention as described in Moon et al. (2018a,b), which selectively attenuates or amplifies each modality based on its importance to the task:

EQUATION

EQUATION

EQUATION

where $c^{net}_t = [h^n_p; h^n_q]$ represents the learned background knowledge for papers $p$ and $q$, which is kept constant during all decoding steps $t$, and $[att^{ctx}; att^{abs}; att^{net}]$ is the attention vector.

To enable our model to copy words from both the citing sentence's context and the cited paper's abstract, we calculated the generation probability and copy probabilities as follows:

EQUATION

where $p_{gen}$ is the probability of generating words, $p_{copy1}$ is the probability of copying words from the citing sentence's context, $p_{copy2}$ is the probability of copying words from the cited paper's abstract, $s_t$ represents the hidden state of the decoder at step $t$, and $e(w_{t-1})$ indicates the input word embedding. Meanwhile, the context vector $c^*_t$, which can be seen as an enhanced representation of source-side information, was concatenated with the decoder state $s_t$ to produce the vocabulary distribution $P_{vocab}$:

$P_{vocab} = \mathrm{softmax}(V'(V[s_t; c^*_t] + b) + b')$. (5)

Finally, for each text, we defined an extended vocabulary as the union of the vocabulary and all words appearing in the source text, and calculated the probability distribution over the extended vocabulary to predict words $w$:

EQUATION

The estimation of the salience of each sentence in a cited paper's abstract was used to identify which information the generation should concentrate on. We assumed a sentence's salience to depend on the citing paper, such that the same sentences from one cited paper can have different salience in the context of different citing papers. Hence, we represented this salience as a conditional probability $P(s_i|D_{src})$, which can be interpreted as the probability of picking sentence $s_i$ from a cited paper's abstract given the citing paper $D_{src}$. We first obtained the document representation $d_{src}$ of a citing paper as the average of all its abstract's sentence representations. Then, for calculating the salience $P(s_i|D_{src})$, we designed an attention mechanism that assigns a weight $\alpha_i$ to each sentence $s_i$ in a cited paper's abstract $D_{tgt}$. This weight is expected to be large if the semantics of $s_i$ are similar to $d_{src}$. Formally, we have:

EQUATION

EQUATION

where $h^{as}_i$ is the $i$-th sentence representation in the cited paper's abstract, $v$, $W_{doc}$, $W_{sent}$ and $b_{sal}$ are learnable parameters, and $\hat{\alpha}_i$ is the salience score of the sentence $s_i$.

We then used the estimated salience of sentences in the cited paper's abstract to update the word-level attention over the cited paper's abstract $\{h^{aw}_j\}$ so that the decoder can focus on these important sentences during text generation. Considering that the estimated salience $\hat{\alpha}_i$ is a sentence weight, we let each token in a sentence share the same value of $\hat{\alpha}_i$.
Accordingly, the new attention $\hat{a}^{abs}_t$ over the cited paper's abstract became $\hat{a}^{abs}_t = \hat{\alpha}_i\, a^{abs}_t$. After normalizing $\hat{a}^{abs}_t$, the context vector $c^{abs}_t$ was updated accordingly.

During model training, the objective of our framework covers three parts: generation loss, salience estimation loss, and citation function classification.

The generation loss was based on the prediction of words from the decoder. We minimized the negative log-likelihood of all target words $w^*_t$ and used it as the objective function of generation:

EQUATION

To include extra supervision in the salience estimation, we adopted a ROUGE-based approximation (Yasunaga et al., 2017) as the target. We assume citing sentences to depend heavily on salient sentences from the cited papers' abstracts. Based on this premise, we calculated the ROUGE scores between the citing sentence and the sentences in the corresponding cited paper's abstract to obtain an approximation of the salience distribution as the ground truth. If a sentence shared a high ROUGE score with the citing sentence, this sentence would be considered salient, because the citing sentence was likely generated based on it, while a low ROUGE score implied that the sentence may be ignored during generation due to its low salience. Kullback-Leibler divergence was used as our loss function, enforcing the output salience distribution to be close to the normalized ROUGE score distribution over sentences in the cited paper's abstract:

EQUATION

EQUATION

where $\hat{\boldsymbol{\alpha}}, R \in \mathbb{R}^m$, $R_i$ refers to the scalar indexed $i$ in $R$ ($1 \le i \le m$), and $r(s_i)$ is the average of the ROUGE-1 and ROUGE-2 F1 scores between the sentence $s_i$ in the cited paper's abstract and the citing sentence. We also introduced a hyper-parameter $\beta$ as a constant rescaling factor to sharpen the distribution.

We added a supplementary component that enables the citation function classification to be trained jointly with the generator, aiming to make the generation conscious of why to cite. Following a prior general pipeline of citation function classification (Zhao et al., 2019), we first concatenated the last hidden state $s_T$ of the decoder, which we considered a representation of the generated citing sentence, with the document representation $d_{ctx}$ of the citing sentence's context. Here, $d_{ctx}$ was calculated as the average of its sentence representations. We then fed the concatenated representation into an MLP followed by the softmax function to predict the probability of the citation function $\hat{y}^{func}$ for the generated citing sentence. Cross-entropy loss was set as the objective function for training the classifier with the ground-truth label $y^{func}$, which is a one-hot vector:

EQUATION

where $N$ refers to the size of the training data and $K$ is the number of different citation functions. Finally, all aforementioned losses were combined as the training objective of the whole framework:

EQUATION

where $\lambda_S$ and $\lambda_F$ are the hyper-parameters that balance these losses.
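As a rough illustration of how the three objectives could be combined in code, the sketch below sums a generation negative log-likelihood, a KL divergence against a sharpened, normalised ROUGE target distribution, and a citation-function cross-entropy. The softmax normalisation of the ROUGE scores, the reduction modes, and the default weights are assumptions rather than the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def total_loss(gen_log_probs, gold_ids, salience_logits, rouge_scores,
               func_logits, func_labels, beta=5.0, lambda_s=1.0, lambda_f=1.0):
    # generation: NLL of the target citing sentence
    # gen_log_probs: (B, T, V) log-probabilities, gold_ids: (B, T)
    l_gen = F.nll_loss(gen_log_probs.transpose(1, 2), gold_ids)
    # salience: KL against the sharpened, normalised ROUGE score distribution
    target = torch.softmax(beta * rouge_scores, dim=-1)          # (B, m)
    pred_log = torch.log_softmax(salience_logits, dim=-1)
    l_sal = F.kl_div(pred_log, target, reduction='batchmean')
    # citation function classification: cross entropy over K functions
    l_func = F.cross_entropy(func_logits, func_labels)
    return l_gen + lambda_s * l_sal + lambda_f * l_func
```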
2
XLNet (Yang et al., 2019) is a transformer-based model for Natural Language Processing tasks. It is best known for its generalized autoregressive pretraining method and is one of the most significant recent models in NLP. As an autoregressive language model built on the transformer architecture, XLNet makes joint predictions over a sequence of tokens by modelling the likelihood of each token under permutations of the token order in a sentence.

The language model comprises two stages, a pre-train phase and a fine-tune phase, and XLNet's contribution mainly concerns the pre-train phase, in which Permutation Language Modeling is introduced as a new objective. We used "hfl/chinese-xlnet-base" (Cui et al., 2020) as the pre-trained model for the Chinese data; this release aims at enhancing Chinese NLP resources and contributes a broad selection of Chinese pre-trained models.

First, the dataset is preprocessed and the resulting tokens are given as input to the pre-trained XLNet model. The model is trained for 20 epochs; its output goes through a mean-pooling layer and then a fully connected layer for fine-tuning and classification, and predictions are made on the given test set. Fig. 2 shows the architecture of the XLNet model.

The dataset contains only the fields "text" and "label"; an extra attribute "id" is added to the dataset for easier preprocessing. The noisy information in the dataset is filtered out using the "tweet-preprocessor" library. The first few lines of the preprocessed dataset are shown in Fig. 3. Tokenization breaks a text document down into phrases, sentences, paragraphs, or smaller units such as single words; these smaller units are called tokens. This breakdown is performed by a tokenizer before the text is fed to the model. We used the "XLNetTokenizer", imported from the "transformers" library, with the pre-trained model, as the model requires tokens in a consistent format. Word segmentation can thus be understood as breaking a sentence down into the component words that are fed into the model.

A pre-trained model is used to classify the text, where an encoder subnetwork is combined with a fully connected layer for prediction. The tokenized training data is then used to fine-tune the model weights. We used "XLNetForSequenceClassification" for sequence classification, which adds a linear layer on top of the pooled output. The model performs binary classification on the test data.
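A minimal sketch of this fine-tuning setup with the Hugging Face transformers library is shown below; the toy texts, batch handling, learning rate and maximum length are illustrative assumptions and not the authors' exact configuration.

```python
import torch
from transformers import XLNetTokenizer, XLNetForSequenceClassification

tokenizer = XLNetTokenizer.from_pretrained("hfl/chinese-xlnet-base")
model = XLNetForSequenceClassification.from_pretrained(
    "hfl/chinese-xlnet-base", num_labels=2)

texts = ["这是一个例子", "另一个例子"]          # cleaned tweets ("text" column)
labels = torch.tensor([0, 1])                   # binary labels ("label" column)

enc = tokenizer(texts, padding=True, truncation=True,
                max_length=128, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for epoch in range(20):                         # 20 epochs as in the paper
    out = model(**enc, labels=labels)           # returns loss and logits
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```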
2
We experiment with several machine learning approaches and features. Before using the tweets for decision making, we also apply a simple preprocessing step to them. In the following, we briefly outline these.

We use the ArkTokenizer (Gimpel et al., 2011) to tokenize the tweets. In addition to tokenization, we lowercase the text and remove any digits present in it.

We extract nine features for each tweet and divide them into Structural, TF-IDF and Embedding features.

Emoticons: We extract all the emoticons from the training data and use them as binary features, i.e. whether a tweet contains a particular emoticon or not.

Interjections: We use an existing list of interjections and, similar to emoticons, use them as binary features.

Lexicons: We use existing positive and negative lexicons and use them as binary features.

We also use the TextBlob tool to compute a sentiment score for each tweet; the score varies between -1 (negative) and 1 (positive). A further feature includes 36 different POS tags (uni-grams), which are used as binary features.

Significant terms: Using tf-idf values, we also extract the top 300 terms (uni-grams and bi-grams, 300 in each case) from the training data and use them as binary features. Note that we extract separate uni-grams and bi-grams for good and bad news.

Tweet characteristics: This feature contains tweet-specific characteristics such as the number of favorites, the number of replies and the number of re-tweets.

For the TF-IDF features, we simply use the training data to create a vocabulary of terms and use this vocabulary to extract features from each tweet, with a tf-idf representation for each vocabulary term.

Finally, we also use fastText embedding vectors (Mikolov et al., 2018) trained on Common Crawl (600 billion tokens).

We investigate 8 classifiers for our task: Multi-Layer Perceptron (MLPC), Support Vector Machine with linear (LSVC) and RBF (SVC) kernels, K Nearest Neighbour (KNN), Logistic Regression (LR), Random Forest (RF), XGBoost (XGB) and Decision Tree (DT). In addition, we also fine-tune the BERT-base model (Devlin et al., 2018). Each classifier, except BERT, is trained and tested on every possible combination of the three feature types.
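A minimal sketch of this setup with scikit-learn is given below, using only the TF-IDF feature block and four of the eight classifiers; the toy tweets, labels, and hyper-parameters are illustrative assumptions. The structural and embedding feature blocks would be stacked onto the TF-IDF matrix (e.g. with scipy.sparse.hstack) before training.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC, SVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

tweets = ["so happy the team won today", "what a beautiful morning",
          "awful accident on the highway", "prices are going up again"]
labels = [1, 1, 0, 0]                            # 1 = good news, 0 = bad news

X = TfidfVectorizer(lowercase=True).fit_transform(tweets)   # TF-IDF feature block

for name, clf in [("LSVC", LinearSVC()), ("SVC", SVC()),
                  ("LR", LogisticRegression(max_iter=1000)),
                  ("RF", RandomForestClassifier(n_estimators=50))]:
    scores = cross_val_score(clf, X, labels, cv=2)
    print(name, scores.mean())
```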
2
In this section, we describe the representations used for comparison in our experiments. Four methods were chosen for each modality. For texts, we used: (1) Latent Semantic Indexing (Landauer et al., 1998) and (2) Latent Dirichlet Allocation (Blei et al., 2003) for topical representation, and (3) Word2Vec (Mikolov et al., 2013) and (4) Global Vectors for word representation (Pennington et al., 2014). For images, we applied: (1) Scale-Invariant Feature Transform (Lowe, 2004), (2) Speeded-Up Robust Features (Bay et al., 2008), (3) Oriented FAST and Rotated BRIEF (Rublee et al., 2011) for Bag-of-Visual-Words (Yang et al., 2007) vector generation, and (4) neural features obtained from the VGG19 pre-trained network (Simonyan and Zisserman, 2014).

Textual distributed representation is based on the aforementioned distributional hypothesis (Harris, 1954), which states that similar statistical distributions denote semantic similarity between two items. In the textual domain, this means that words that appear in the same context have similar semantic meaning. In this paper, we review methods of textual representation in two ways: one by topic representation, obtaining vectors that encode how strongly words and documents are associated with topics inferred from their co-occurrence in a corpus, and one by word embeddings, creating word representations based on their occurrence context and combining them into a single document vector. Thus, the methods for textual representation investigated in this paper are:

• Latent Semantic Indexing (LSI) (Landauer et al., 1998): This method uses the term-document matrix that encodes the frequency of each term per document, and its eigenvalues and eigenvectors, to decompose this data and find a new representation. This is accomplished by Singular Value Decomposition, selecting only the highest eigenvalues and their corresponding eigenvectors to recompose the term-document matrix, leading to a reduced topic-document matrix.

• Latent Dirichlet Allocation (LDA) (Blei et al., 2003): Similar to LSI, this method codifies a topical distribution of words using a term-document matrix. But instead of matrix operations to simplify the original data, it uses probability and parameter estimation to find word-topic and topic-document distributions.

• Word2Vec (W2V) (Mikolov et al., 2013): This neural model uses a shallow neural network to create distributed representations based on the context of each word. The model can be trained in two ways: training it to find the context of a given word (Skip-gram) or to find the central word of a given context (CBoW). Either way, the model codifies a representation of a word in its hidden layer. The simplest way of codifying a document vector from its words is to add all present vectors and divide them by the number of words, as the semantic information is kept in this vector combination. This document vector performs poorly on large texts, as it loses semantic information shared between contexts in the same way as the standard Bag-of-Words representation does. Another way of codifying document vectors that circumvents this limitation is Doc2Vec (Le and Mikolov, 2014), which adds a document identification token to each context and then calculates its embedding.

• Global Vectors for Word Representation (GloVe) (Pennington et al., 2014): This statistical model uses a term co-occurrence matrix and ratios of co-occurrence to find word vectors with distances between words relative to their co-occurrence ratios.
In the example given in the original paper, the ratio P(solid|ice)/P(solid|steam) is much higher than 1, meaning that "solid" is more semantically related to "ice" than to "steam". The objective function of this model reflects this ratio in the distances between these word vectors.

A digital image is represented by a matrix of pixels (tuples of numbers with intensities of particular color channels). Albeit great for visualization, this matrix often has highly correlated neighboring pixels, creating redundant data. To extract meaningful mathematical information from an image, we must first find regions of interest that uniquely define it, and map these to a vector space where they can be compared. This paper uses two different ways to generate features from an image: hand-crafted descriptors, which are predefined ways to find regions of interest and encode them into vectors; and neural features, automatically extracted and selected by an ImageNet pre-trained neural network. Thus, the methods for visual representation investigated in this paper are:

• Scale-Invariant Feature Transform (SIFT) (Lowe, 2004): This method extracts features that are invariant to scale, illumination and rotation. It is composed of four main steps: (1) keypoint extraction, via Differences of Gaussians at different scales; (2) keypoint localization, to refine and filter extracted keypoints; (3) orientation assignment for each keypoint, to achieve rotation invariance; (4) keypoint description, using the histogram of gradients in the neighborhood of the keypoint to encode a 128-position vector.

• Speeded-Up Robust Features (SURF) (Bay et al., 2008): As an extension of the SIFT method, SURF follows the same steps as SIFT while applying different mathematical methods. For keypoint extraction and localization, the Differences of Gaussians are replaced by box filters and Hessian matrix determinants; for orientation assignment and keypoint description, SURF uses Haar wavelet responses around the keypoint. SURF achieves results similar to SIFT, generating smaller vectors (64 positions) with improved speed.

• Oriented FAST and Rotated BRIEF (ORB) (Rublee et al., 2011): This method is a fusion of two descriptors: FAST (Rosten and Drummond, 2006) and BRIEF (Calonder et al., 2010). ORB uses the FAST keypoint extraction method, achieves rotation invariance by analyzing the weighted centroid of intensities around each keypoint, and then combines this orientation with the BRIEF descriptor by prior rotation of the pixels in the described keypoint neighborhood.

• VGG19 classes (Simonyan and Zisserman, 2014): This method uses the classes detected by the VGG19 model pre-trained on the images from ImageNet, creating a 1000-position vector of probabilities of specific objects being present in a scene.

To combine multiple data representations in a simple way, we can use feature concatenation prior to classification (Wang et al., 2003), create ensembles of single-mode classifiers (Radová and Psutka, 1997), align features of single-mode representations by similarity measures (Frome et al., 2013) or co-learn single-modality representations using another modality as basis (Information Resources Management Association, 2012, Chapter 28).
But these representations do not encode a real fusion of the two modalities in a single mathematical vector space, as they only add information to an existing space or combine inferences from separate spaces in a shallow way.

In this paper, we use both early-stage feature fusion with concatenation (Wang et al., 2003) and model-based feature fusion, which maps both modalities into a single shared vector space. For the latter, we use a simple multimodal autoencoder framework with single-mode pre-training, with an architecture similar to the one proposed in Ngiam et al. (2011):

1. Two deep autoencoders are pre-trained with single modalities until convergence;
2. The weights are then ported to a multimodal deep autoencoder with one (or more) shared hidden layers that codify our multimodal representation;
3. Training includes instances that reconstruct both modalities from one, reconstruct one modality from two, and reconstruct one modality based on the other.

The architecture described in Vukotić et al. (2016) (BiDNN) will also be used to compare results between the multimodal approaches. The simple autoencoder described above was also compared in Vukotić et al. (2016), and we replicate this experiment with our dataset.
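A minimal sketch of such a shared-layer multimodal autoencoder is given below: two modality-specific encoders feed a shared hidden layer, and two decoders reconstruct both modalities. The single-mode pre-training step is omitted, cross-modal reconstruction is simulated by zeroing one input, and all layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultimodalAE(nn.Module):
    def __init__(self, d_text, d_img, d_shared=256):
        super().__init__()
        self.enc_text = nn.Sequential(nn.Linear(d_text, 512), nn.ReLU())
        self.enc_img = nn.Sequential(nn.Linear(d_img, 512), nn.ReLU())
        self.shared = nn.Linear(1024, d_shared)           # joint multimodal code
        self.dec_text = nn.Sequential(nn.Linear(d_shared, 512), nn.ReLU(),
                                      nn.Linear(512, d_text))
        self.dec_img = nn.Sequential(nn.Linear(d_shared, 512), nn.ReLU(),
                                     nn.Linear(512, d_img))

    def forward(self, text, img):
        z = self.shared(torch.cat([self.enc_text(text), self.enc_img(img)], dim=-1))
        return self.dec_text(z), self.dec_img(z), z

model = MultimodalAE(d_text=300, d_img=1000)
text, img = torch.randn(8, 300), torch.randn(8, 1000)
# training instance: reconstruct both modalities from the text input alone
rec_t, rec_i, _ = model(text, torch.zeros_like(img))
loss = nn.functional.mse_loss(rec_t, text) + nn.functional.mse_loss(rec_i, img)
```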
2
As our starting dataset we use the TED-Multilingual Discourse Bank (MDB) (Zeyrek et al., 2018). It consists of transcripts of six scripted presentations from the TED Talks franchise, in multiple languages, but we will use only the English portion (6975 words total). Zeyrek et al. annotated these transcripts with discourse relations in the style of PDTB (Prasad et al., 2019), and we will rely on this for some analysis in Section 5. Earlier pilots we conducted relied on unscripted spoken dialogues from the DISCO-SPICE corpus (Rehbein et al., 2016), but these transcripts were too hard for our participants to follow. Relying on the scripted presentations of TED-MDB avoided this problem while still remaining in the realm of reasonably naturalistic spoken text.

Our contribution is to extend this existing dataset with elicited questions. Our procedure consists of two phases: an elicitation phase, where we ask people to read a snippet of text and enter a question it evokes, then read on and indicate whether the question gets answered and how, and a comparison phase, where we ask people to indicate which of the elicited questions are semantically/pragmatically equivalent, or more generally how related they are. The second phase is necessary because in the first phase we elicit questions in free form, and what counts semantically/pragmatically as 'the same question' can be operationalized in many different ways. We will describe each phase in turn.

Elicitation phase For the elicitation phase, texts were cut up into sentences (using NLTK's sentence tokenizer), and only long sentences (> 150 words) were further cut up at commas, colons or semicolons by a simple script. For convenience we will refer to the resulting pieces of text as sentences. Our aim was to fully cover the TED-MDB texts with evoked questions, by eliciting evoked questions after every sentence. We decided to present excerpts of these texts instead of full texts, because we wanted our approach to be able to scale up to (much) longer texts in principle, and in order to keep annotators fresh. (Figure 1: A view of our elicitation tool, here asking whether a previously entered question has been answered yet.) We presented each participant with up to 6 excerpts from different source texts (more would have made the annotation task too long), each excerpt comprising up to 18 sentences (a trade-off between having enough context and keeping annotators fresh). Each excerpt was incrementally revealed, with a probe point every 2 sentences. To still get full coverage of the texts, we alternated the locations of probe points between participants. In this way we covered the 6975 words of TED-MDB with a total of 460 probe points.

At each probe point participants were asked to enter a question evoked by the text up to that point, and, for previously unanswered questions evoked at the previous two probe points, they were asked whether the question had been answered yet by choosing a rating on a 5-point scale from 1 'completely unanswered' to 5 'completely answered' (henceforth ANSWERED). We limited the number of revisited questions to 2 in order to avoid breaking the flow of discourse too much and to prevent the task from becoming too tedious, although this may mean that we miss some answers. (However, in a pilot study we found that questions that weren't answered after the first probe point wouldn't be answered at the next two probe points either.)
The formulation asking for evoked questions was: "Please enter a question the text evokes for you at this point. (The text so far must not yet contain an answer to the question!)". The screen for indicating answers is shown in figure 1.The decision to present only excerpts, and to check question answeredness only for two subsequent chunks, make scalable annotation by non-experts feasible. However, this biases our approach towards questions that reflect only 'local' discourse structure. This restriction must be kept in mind, but note that our approach shares this locality for instance with the discourse relations approach, and accordingly with the existing annotations of TED-MDB on which we will rely further below. For a detailed overview of our elicitation phase and more reflection on design decisions such as these, we refer to an earlier report (Westera and Rohde, 2019). For both questions and answers, participants were asked to highlight the main word or short phase in the text that primarily evoked the question, or provided the answer, respectively. They did this by dragging a selection in the newest two sentences of the excerpt, and could highlight at most 10 words. The motivation behind this word limit was that it would force annotators to be selective, thus making their highlights more informative (we want only the most important words, even if without context these would not suffice to evoke the question or provide the answer in full). Highlights for different questions were given different colors, and highlights for answers were given the same color as the question they were answers to.We set up this task in Ibex (Internet-based experiments, https://github.com/addrummond/ibex/), hosted on IbexFarm (http://spellout.net/ ibexfarm/), and recruited 111 participants from Amazon Mechanical Turk (https://www.mturk.com/). 3 Each participant could do the task once. We estimated that the task would take about 50 minutes, and offered a monetary compensation of $8.50. We aimed to have at least 5 participants for every probe point, but because we let the excerpts overlap many probe points have more than that. For an overview of these basic numbers (as well as the resulting data, discussed in the next section) see Table 1 .Comparison phase The goal of the comparison phase, recall, was to establish a notion of inter-annotator agreement on the (free-form) questions we elicited, by gathering judgments of question relatedness/equivalence. For this, we set up a task in the Mechanical Turk interface directly. A screenshot is shown in figure 2. We published tasks of 10 snippets of around 2 sentences, each followed by an exhaustive list of the questions we elicited at that point. In each task one of these questions was designated the 'target question', the others 'comparison questions', and participants were asked to compare each comparison question to the target question. Questions were rotated through the 'target question' position, so for every pair of questions we would get the same number of comparisons in either order. 
For each comparison our participants were instructed to select one of the following options (the Venn-diagram-like icons from left to right in the image):• Equivalence: Target and Comparison question are asking for the same information, though they may use very different words to do so.• Overlap: Target and Comparison question are slightly different, but they overlap.• Related: Target and Comparison question are quite different, no overlap but still closely related.• Unrelated: Target and Comparison question are very different; they are not closely related.• Unclear: Target and/or Comparison question are unclear.In addition to these descriptions, we instructed participants that what we were after is "what kind of information the questions are asking for, not how they are asking it", with the advice to look beyond superficial appearance, to interpret the questions in the context of the text snippet, and that if two questions invite the same kinds of answers, they count as the same kind of question.We estimated that each task would take around 4 minutes and offered a reward of $0.90. We limited participants to doing at most 20 tasks per person (each task consisting of 10 snippets) to ensure diversity. We ended up recruiting 163 workers. For these basic numbers (as well as numbers of the resulting data, discussed next), see again Table 1 .
2
Our research takes a similar approach as Kim and Pantel (2006) and Xiong and Litman (2011) on feature extraction and machine-learning, while looking at a closed system without clear "wisdomof-the-crowd" indicators. We evaluate the impact of features based on textual analysis of the comment itself, but also features based on a comparison between the infoPOEM and the comment, as well as features relying on external domainspecific resources. Our methodology consists of (1) circumscribing the data and developing a gold standard, (2) defining a set of features that will best describe the data to be categorized, (3) experiment with machine learning approaches for categorization and (4) perform an evaluation using the gold standard.The gold standard was annotated by three medical students with different experience levels. They were asked to read anonymous comments submitted by physicians and indicate if they found them valuable for their knowledge or practice. 1 Each annotator was provided with a list of anonymous comments and their associated in-foPOEM for reference. They could access, if needed, the full text of the infoPOEM if the comment was not clear to them. A preliminary annotation phase was done with 300 randomly selected comments to be annotated by the three annotators (100 each). This phase provided a better understanding of the problem to validate the annotation schema used for the main annotation task. The classification schema included three choices to annotate the helpfulness of a comment: "valuable", "non-valuable" or "I don't know". The annotators were asked to consider each comment independently and not let the reading of previous comments influence their choice.The main annotation task was based on two batches of comments. A first one, relatively small, contained 250 comments and was given to all three reviewers and allowed us to calculate an interannotator agreement. A larger set of comments was split in three parts to have each comment annotated by a single reviewer. This provided a total of 3,470 comments associated with 327 randomly picked infoPOEMs. Of these comments, 1,586 (45.6%) were deemed valuable and 1,884 (54.3%) non-valuable. A dozen comments were tagged "I don't know" and removed from the dataset.The 300 comments from the preliminary annotation step joined with the 250 comments for the inter-annotator agreement were used as the development dataset (550 unique comments) to define, develop, test and refine features presented in the next section but were not used in the dataset for the final evaluation. The other set of 3,470 comments was used as the evaluation dataset for performance assessment.The size of the manually annotated dataset compares advantageously to the 1000 annotated comments of Ghose and Ipeirotis (2007) and the 267 of Xiong and Litman (2011) . Using the first 250 comments annotated by the three annota-tors, an inter-annotator agreement of 0.4846 was computed using the Fleiss' Kappa method for multiple annotators with all three classes (valuable / non-valuable / i don't know). The interannotator agreement was recalculated using only the 247 comments with only the two main classes (valuable/non-valuable) which provided a score of 0.5004. The remaining data shows a stronger agreement on valuable comments than on nonvaluable ones. The level of agreement calculated on this dataset is considered moderate according to Landis and Koch (1977) when compared to pure chance agreement and is of the same order as in Xiong and Litman (2011) . 
Using each annotator as the gold standard versus others, the f-measures were 0.806 between annotators 1 and 2, 0.783 between 1 and 3 and 0.792 between 2 and 3.The reason behind the average ratings for interannotator agreement score can be explained by one or many of the following points: coding instructions were interpreted differently by each annotator, coding decision is based on factors which are not present in textual data (like relevant prior knowledge, expertise domain or interest, personal taste or bias and so on), decision factors were present in the text but not correctly understood by the readers, etc. While it is difficult to provide a clear and proven diagnosis of the reason behind these scores, lower scores usually increase the difficulty to develop prediction systems. As such, the average agreement provides a contextualisation of potential performance for this task; a near-perfect classification of comments is not the goal as it would overfit the three annotator's classification.The purpose of defining features is to capture as well as possible the characteristics of comments which would be representative of their helpfulness character. Inspired by previous research, we define a set of base features, focusing on standard text analysis techniques. But we apply these techniques not only to the comment's content itself, but also in a comparative setting looking at similarities between an infoPOEM and its comments. We present these base features first. Second, we look at metadata features from the infoPOEM itself. Third, we use the actual IAM questionnaire as a source of features. Fourth, inspired by our specific problem being in the medical domain, we define a set of features using a medical resource, the UMLS (Unified Medical Language System). The feature extraction process was developed using GATE (Cunningham et al., 2011) with part-ofspeech TreeTagger (Schmid, 1994) tool.The base set includes all features extracted using natural language processing techniques. It includes features and their representations used in previous researches like Kim and Pantel (2006; Xiong and Litman (2011) as well as new ones introduced in this article. They can be regrouped in the structural, syntactic and semantic subsets.Structural Structural features target statistical properties of tokens contained in the comments. The total number of each one was added as separate features. Two features were also added for tokens: the standard deviation and a three-value discretization of the standard deviation to account for the length being within range of the average (avg) number of tokens of all comments, above (high ) or under (low) it, using ±1σ as the threshold.Syntactic Following a part-of-speech tagging (attributing a syntactic role to each word), the number of stop words and content words were added as features, which summed up to the number of tokens from the structural feature. The standard variation and its discretization (as seen previously) were also added. The first and second person pronouns (ex: I, we, us, etc) were added as total count and binary occurrence (true if any occurrence are observed, false if none) features to the dataset to identify author related comments like accounts of personnal experiences, thoughts, preferences or opinions.Then for each type of content words (verb, adverb, noun, adjective) found both in the comment and the corresponding infoPOEM, we added four similarity-based features. 
They were the total count of similar occurrences, the binary occurrence, the ratio between the total count and the total number of content words and finally the ratio between the total count and the total number of words.Semantic To identify comments with strong opinions or impressions, we use specific verbs (e.g. admit, enjoy, deem, endorse, decline, concern, advise, ...) and match the infinitive form of these verbs in the comments following a partof-speech tagging step. Negative indicators (not, never, neither, nor, can't, don't, etc) are also annotated to target potentially critical comments. As the comments were on infoPOEMs within a scientific discipline, terminology related to the scientific method (observation, qualitative, inference, ...), the statistical domain (population, marginal variable, match sample, ...) and to measurement (unit, cm, m, mg, ug, kg, ml, ...) were added separately as features. Finally, the five standard section's labels (title, clinical question, bottom line, study design, synopsis) from the infoPOEM were added as keywords to detect if a text was commenting on the specific section of the infoPOEM.The number of instances and the binary occurrence for each of these semantic concepts (opinion verbs, domain terminology, negative indicators and localisation indicators) were added as features.To each infoPOEM is associated a code called the level of evidence (LOE). This code describes the type of research protocol used in therapy, diagnosis or prognosis research using one letter and one number (1a, 1b, 1c, ..., 2a, 2b, ...) . A minus sign can be added at the end of the code to denote researches that cannot provide conclusive answers in cases where the confidence interval is too large or the heterogeneity of the population's sample used is problematic. We use this code and split it in 3 parts to provide 3 features: the type (first character, from 1 to 5), the subtype (second character, from A to C) and the presence of the minus indicator.Each question from the IAM questionnaire was added as a feature. Most of the questions asked for a logical yes/no answer. A few questions accepted either yes, no or "possibly" as an answer. Only one question pertaining to the relevance of the information regarding the physician's patients, asked for an answer using three levels: "totally relevant", "partially relevant" or "not relevant". The possibility to answer some specific questions was also dependant on the answer on previous questions; i.e. questions #3 and #4 were only available if the totally or partially relevance was chosen at question #2. Regardless of this factor, all questions were added as individual and stand-alone features in the dataset.Unlike the work with Amazon data which relies on official product feature sources to find vocabulary representative of different products, we do not have access to such sources in this study. Instead, we extracted single words and multiword expressions from the Unified Medical Language System, a large medical ontology hosted at the National Library of Medicine (http://umlsks.nlm. nih.gov/) to analyse the domain specific nature of the reviews and infoPOEMs. 
The relevant part of this resource splits biomedical and related concepts into 13 groups and 94 types using themes like genes and molecular sequences, anatomy, living beings, physiology, procedures, disorders, organizations and so on, with each type related to one group.

For each type and group, the number of occurrences, the binary occurrence and the similarity occurrences were added as features. The similarity occurrence indicates how many expressions found in a comment were also found in the infoPOEM related to that comment. This type of feature was added to verify whether an author was talking about domain-specific concepts from the infoPOEM. Because of the relation between groups and types, each matching expression was represented both with a type feature and with its corresponding group feature. In addition, the global binary and total occurrence of UMLS expressions were added as two features to logically regroup all UMLS type and group features. Therefore, if a word was tagged as being part of 4 types and 3 groups, the global binary occurrence would be 1 and the global number would be 7.

5 Performance evaluation
2
Given the fact that including more data in a reading comprehension system is important for generalization (Chung et al., 2018; Talmor and Berant, 2019), and given that our created dataset has the SBRCS which are missing from previous datasets, we propose a two-step method to generate skill-related questions from a given story: HTA (how to ask) followed by WTA (what to ask). HTA teaches the model the typical format of comprehension questions using large previously released datasets. We use two well-known datasets, SQuAD (Rajpurkar et al., 2016) and CosmosQA (Huang et al., 2019); in Appendix A.3, we add more details on both of these datasets. These previous datasets are not annotated with the question types outlined in Section 3.1, so the HTA phase allows us to take advantage of those datasets. WTA guides the model to generate questions that test the specific comprehension skills enumerated in Section 3.1. Thus, in HTA we train (fine-tune) a model on large QG datasets, and then we further train the model to teach it what to ask (WTA). For the generation model, we use the pre-trained Text-to-Text Transfer Transformer T5 (Raffel et al., 2020), which closely follows the encoder-decoder architecture of the transformer model (Vaswani et al., 2017). T5 is a SOTA model on multiple tasks, including QA.

Previous work showed that incorporating more data when training a reading comprehension model improves performance and generalizability (Chung et al., 2018; Talmor and Berant, 2019). However, we cannot incorporate previously released datasets with our new one, as they do not include compatible question-skill information. They do, however, contain many well-formed and topical questions. Thus, we train a T5 model on the SQuAD and CosmosQA datasets to teach the model how to ask questions. Previous neural question generation models take the passage as input, along with the answer. However, encoders can pass all of the information in the input to the decoder, occasionally causing the generated question to contain the target answer. Since the majority of the questions in our created dataset are inferential questions, the answers are not explicitly given in the passages (unlike extractive datasets). Thus, we feed the stories to the encoder but withhold the answers. Unlike previous systems, we then train the model to generate the questions and answers. We propose this setting to generate fewer literal questions. During our experiments, we evaluated the effect of excluding the answers from the input and found it beneficial to the system.

In Figure 1 we show the input-output format of the model. The encoder input is structured as <STORY_TEXT> </s>, where </s> is the end-of-sentence token. The decoder generates multiple question-answer pairs as <QUESTION_TOKENS>1 <as> <ANSWER_TOKENS>1 <sp> ... <QUESTION_TOKENS>n <as> <ANSWER_TOKENS>n </s>, where <as> separates a question from its answer, and <sp> separates one question-answer pair from another. The model can generate more than one question-answer pair. We prepare the data to include all of a passage's question-answer pairs in the decoder. Some passages include a single question-answer pair, and some passages have up to fifteen pairs.

QG models take a passage/story as input and generate a question. The type of the generated question is not controlled and is left for the system to decide. As a result, the generated question is often not the desired one.
Thus, in order to control the style of the generated question, the system needs an indication of the skill for which it is expected to generate a question. Prior work proposed a way to control the style of the generated questions (e.g. what, how, etc.): the authors built a rule-based information extractor to sample meaningful inputs from a given text, and then learned a joint distribution of <answer, clue, question style> before asking the GPT-2 model (Radford et al., 2019) to generate questions. However, this distribution can only be learned using an extractive dataset (e.g. SQuAD); the model cannot learn to generate inferential questions.

To control the skill of the generated question, we use a specific prompt per skill, by defining a special token <SKILL_NAME> corresponding to the desired target skill, using the collected dataset. This helps us control what to extract from the pre-trained model. Thus, the encoder takes as input <SKILL_NAME> and <STORY_TEXT>, where <SKILL_NAME> indicates to the model for which skill the question should be generated (see Figure 2). The data format in the decoder is similar to the one in the HTA step, but here the model generates a single question-answer pair. As a result, the encoding of the <STORY_TEXT> will be based on the given <SKILL_NAME>. In this way, the model encodes the same story into a different representation when a different <SKILL_NAME> is given. A similar technique was used in the literature to include persona profiles in dialogue agents to produce more coherent and meaningful conversations.

Figurative language is common in stories, as it makes ideas and concepts easier for the reader to visualize. It is also an effective way of conveying an idea that is not easily understood. With this skill, we examine the reader's ability to recognize the implied meaning of a sentence or a type of figurative language.
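A minimal sketch of preparing a WTA training instance in the input/output format described above is given below; the concrete skill token, the example story and question-answer pair, and the T5 checkpoint are illustrative assumptions.

```python
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
# register the skill token and the separators used in the decoder target
tokenizer.add_tokens(["<figurative_language>", "<as>", "<sp>"])

story = "The wind whispered through the trees as Mia walked home."
question = "What does 'the wind whispered' mean?"
answer = "The wind blew softly."

encoder_input = f"<figurative_language> {story} </s>"
decoder_target = f"{question} <as> {answer} </s>"

enc = tokenizer(encoder_input, return_tensors="pt")
dec = tokenizer(decoder_target, return_tensors="pt")
# enc.input_ids and dec.input_ids would be fed to a T5ForConditionalGeneration
# model as input_ids and labels respectively during fine-tuning.
```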
2
Our dataset is based on the Twitter corpus used in Waseem and Hovy (2016), which contained 136,052 English Tweets, identified by searching for common racial, religious and sexist slurs and terms, as well as hashtags known to trigger hate speech, over a 2-month period. With the help of an outside annotator, they coded 16,914 Tweets as either racist (1,972 Tweets by 9 users), sexist (3,383 Tweets by 613 users) or neither racist nor sexist (11,559). Using the 'twitteR' package (Gentry, 2016), we downloaded the Tweets based on their Twitter IDs; however, at the time of download only 2,818 Tweets were still available, presumably because the relevant posts had been deleted. Of these Tweets, 628 had been coded as sexist and 858 as racist. Our analysis focuses on these 1,486 Tweets.

In general, research using MDA has been based on a feature set which has grown over time and which has changed depending on the variety and the language under analysis. There is, however, a core set of features related to basic parts-of-speech and grammatical constructions (Biber, 1988), which we have included in our analysis. These features include tense and aspect markers, place and time adverbials, personal pronouns, questions, nominal forms, passives, subordination, complementation, adjectives and adverbs, modals, specialised verb classes, coordination, negation and other lexical classes, such as amplifiers, downtoners and conjunctions. In addition, as is generally the case in MDA studies (e.g. Grieve et al., 2010), we included additional features to refine our analysis for this particular variety of language, including hashtags, URLs, capitalisation, imperatives, comparatives, and superlatives. We then tagged our corpus for each of the 86 linguistic features. This was achieved by first tagging the Tweets for basic part-of-speech information using the Gimpel et al. (2011) Twitter Tagger. Based on the tagged corpus, we then automatically identified occurrences of our 86 features by looking for specific tags, words, and sequences of tags and words, taking into account various exceptional forms found in this corpus.

Rather than measure the relative frequency of these forms across the texts in the corpus, we simply considered whether or not each of these features occurred in each of the texts, retaining the 81 features that occurred in at least 1% of the Tweets in our corpus. We then subjected this 81-feature-by-1,486-text binary data matrix to a multiple correspondence analysis (MCA) in R using FactoMineR (Husson et al., 2017). MCA is essentially a dimension reduction method, which aims to represent high-dimensional categorical data in a low-dimensional space, similar to the factor analysis used in traditional MDA for continuous data. MCA is predominantly used to analyse data from questionnaires and surveys (Husson et al., 2010), but it has also been used in linguistics, most notably in lexical semantics (e.g. Tummers et al., 2012; Glynn, 2009, 2014).

The MCA returns a positive or negative coordinate for each linguistic feature on each dimension, as well as a value indicating the variable's contribution to that dimension (Le Roux and Rouanet, 2010). If variables' coordinates are of similar value, this indicates that these variables often co-occur in Tweets. The MCA also assigns a positive or negative coordinate to each Tweet on each dimension, which can then be plotted to visualize the relationship between the Tweets on each dimension.
Tweets with similar coordinates on a dimension will share linguistic features. Each dimension was interpreted by considering the functional properties shared by the linguistic features with the strongest contributions. The features contributing most strongly to each dimension (contribution values in parentheses) were as follows:

Dimension 2: Articles (1.9), Quantifiers (1.9), Attributive adjectives (1.6), Synthetic negation (1.5), Predicative adjectives (1.2), Contrastive conjunctions (1.2), absence of Other pronouns (1.1), Nominalisations (1.1), Prepositions (1), Numerals (.9), absence of 2nd person pronouns (.9), absence of Accusative case (.9), Perfect aspect (.7), Determiners (.7), absence of Question marks (.7)

Dimension 3 (+): Question DO (9), Question marks (6.8), 2nd person pronouns (6.8), absence of Subject pronouns (4.4), Initial DO (3.7), Initial verbs (3.2), Determiners (3), Nominalisation (2), Synthetic negation (2), Possessive pronouns (1.9), absence of 1st person pronouns (1.8), Other pronouns (1.7), absence of Nominative case (1.1), absence of Third person pronouns (1), Pro-verb DO (.9), Emoticons (.8), Existentials (.8), BE as main verb (.7)

Dimension 3 (-): Subject pronouns (8.7), 1st person pronouns (6.2), Auxiliary BE (3.2), 3rd person pronouns (2.8), Object pronouns (2.5), absence of 2nd person pronouns (1.9), Progressive aspect (1.8), absence of Determiners (1.7), Verbs of perception (1.6), Nominative case (1.3), absence of Mentioning (1.2), absence of Question marks (1.2), absence of Other pronouns (.9), Passives (.8)

Dimension 4 (+): Predicative adjectives (4.5), Existentials (4.4), absence of Prepositions (3.7), absence of Proper nouns (3.5), BE as main verb (3.4), Place adverbials (3), Emoticons (2.5), absence of Nouns (2.3), Synthetic negation (2.3), absence of Capitalisation (2), Subject pronouns (1.9), 1st person pronouns (1.9), absence of Past tense (1.4), Interjections (1.3), absence of Auxiliary BE (1.2), Comparatives (1.1), absence of Articles (1), Requests (.9), absence of URLs (.8), Nominative case (.8)

Dimension 4 (-): Auxiliary BE (7.3), Progressive aspect (4.6), Hashtags (3.9), Capitalisations (3.2), By-passives (3.3), URLs (3.1), Proper nouns (2.8), Public verbs (2.1), absence of BE as main verb (1.8), Past tense (1.5), Numerals (1.5), Question DO (1.3), Passives (1), Prepositions (1), Perfect aspect (1), absence of Subject pronouns (1), Articles (.8), absence of Nominative case (.7), absence of Predicative adjectives (.7), Infinitives (.7)

Following Le Roux and Rouanet (2010), we interpreted each dimension by considering all features with a contribution that exceeds 0.62, the average contribution of a feature on a dimension (100/162). In addition, the Tweets with the highest positive and negative coordinates on each dimension were subjected to a micro-analysis to confirm and refine these functional interpretations. Finally, the racist and sexist Tweets were compared on each dimension using Wilcoxon signed-rank tests to see if there were any functional differences between these two forms of abusive language.
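For readers who prefer Python to the R/FactoMineR pipeline used by the authors, a minimal sketch of the same idea is shown below, assuming the third-party `prince` library, which exposes a scikit-learn-style MCA. The toy binary text-by-feature matrix and the feature names are illustrative.

```python
import pandas as pd
import prince   # third-party MCA implementation (assumed available)

# toy binary (occurrence) matrix: rows are tweets, columns are linguistic features
X = pd.DataFrame({
    "question_mark": [0, 0, 1],
    "second_person": [1, 0, 1],
    "first_person":  [0, 1, 0],
    "hashtag":       [1, 0, 0],
}, index=["tweet_1", "tweet_2", "tweet_3"]).astype(str)   # MCA expects categorical data

mca = prince.MCA(n_components=2).fit(X)
tweet_coords = mca.transform(X)     # coordinates of each tweet on each dimension
print(tweet_coords)
```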
2
In this study we examine and analyze various models and data augmentation strategies for sarcasm detection. First, we go through the data augmentation methods; then we discuss the structure and hyperparameters of the models. The code for all models is available on GitHub.

For the first augmentation method, we used the GPT-2 generative model (Radford et al., 2019) to generate 4000 tweets for both the sarcastic and non-sarcastic classes. We then randomly selected 2000 tweets of each class to increase the dataset size and obtain more sarcastic samples.

For the second method, we used three distinct ways to change the data: eliminating, replacing with synonyms, and shuffling. These processes were applied in the following order: shuffling, deleting, and replacing. The removal and replacement were carried out systematically. We used the words' roots to create a synonym dictionary, which was built by scraping the Thesaurus website. When a term was chosen to be swapped with its synonyms, we chose one of the synonyms randomly (Figure 1). We tried each combination of these processes to find the best data augmentation combination (a total of seven).

We utilized SVM to discover the optimal approaches for dataset preprocessing and word embeddings. For data augmentation, we employed both the generator-based and the mutation-based methods. We also put other data preprocessing approaches to the test, such as link removal, emoji removal, stop-word removal, stemming, and lemmatizing. We utilized TF-IDF, Word2Vec (Mikolov et al., 2013), and BERT (Devlin et al., 2018) for word embeddings. We found that a regularization value of 10 with a Radial Basis Function (RBF) kernel, BERT word embeddings, and no data preprocessing gave the best results.

We begin with the intuition that a memory model can help us reach better results, so we started with the Long Short-Term Memory (LSTM) model (Hochreiter and Schmidhuber, 1997). We used one LSTM layer followed by a time-distributed dense layer, repeated these two layers one more time, and then used another LSTM layer followed by two dense layers. This model and all of the following models in this section were trained for 10 epochs.

In addition, we used a Bidirectional Long Short-Term Memory (BLSTM). A bidirectional layer runs the inputs in two directions, one from past to future and the other from future to past. For this network we used one BLSTM layer followed by a time-distributed dense layer, repeated these two layers one more time, and then used another BLSTM layer followed by two dense layers.

Furthermore, we combined LSTM and BLSTM with Convolutional Neural Networks (CNNs). In the CNN-LSTM architecture, CNN layers for feature extraction on the input data are paired with an LSTM to facilitate sequence prediction. Although this model is often employed for video datasets, Rehman et al. (2019) demonstrated that it can perform better in sentiment analysis tasks. For the convolutional part, we used three 1D convolutional layers followed by a 1D global max-pooling layer, placed at the end of the LSTM-based networks.

BERT's fundamental technological breakthrough is the use of bidirectional training of the transformer, a prominent attention model, for language modeling (Devlin et al., 2018). The researchers describe a new Masked Language Model (MLM) approach that permits the bidirectional training that was previously difficult.
They found that bidirectionally trained language models can have a better understanding of language context and flow than unidirectional ones.

Robustly Optimized BERT, or RoBERTa, has a nearly identical architecture to BERT; however, the researchers made some minor adjustments to its architecture and training technique to improve on BERT's results.

We used both RoBERTa, with the twitter-roberta-base checkpoint, which has been trained on nearly 58 million tweets and fine-tuned for sentiment analysis with the TweetEval benchmark, and BERT, with bert-base, from Huggingface (Wolf et al., 2019). For both models, we employed five epochs, a batch size of 32, 500 warmup steps, and a weight decay of 0.01.

One of the most important achievements in deep learning research in the recent decade is the attention mechanism (Vaswani et al., 2017). The attention mechanism addresses the encoder-decoder model's restriction of encoding the input sequence into a single fixed-length vector from which every output time step is decoded, a difficulty thought to be more prevalent when decoding long sequences.

We start with the assumption that if a model with an attention layer is trained to identify sarcasm at the sentence level, the sarcastic words will be the ones the attention layer learns to value. As a result, we added an attention layer to our LSTM-based and BERT-based models. The results will be discussed further.

Google's T5 text-to-text model (Raffel et al., 2019) outperformed the human baseline on the GLUE, SQuAD, and CNN/Daily Mail datasets and earned a remarkable 88.9 on the SuperGLUE language benchmark.

We fine-tuned T5 for our problem and dataset by giving the sarcasm label as the target and the tweets as the source. We used two epochs, a batch size of 4, a maximum tokenization length of 512, an Adam epsilon of 1e-8, a weight decay of 0, no warmup steps, and a learning rate of 3e-4 (Figure 2). (Figure 2 caption: Fine-tuning the T5 model for the sarcasm detection problem.)
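A minimal sketch of this T5 fine-tuning setup with the transformers library is shown below, treating the tweet as the source text and the label word as the target; the checkpoint, the task prefix, the toy data, and the unmasked padding in the labels are simplifying assumptions rather than the authors' exact configuration.

```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

tweets = ["oh great, another monday", "the weather is lovely today"]
labels = ["sarcastic", "not sarcastic"]

src = tokenizer(["classify sarcasm: " + t for t in tweets],
                padding=True, truncation=True, max_length=512, return_tensors="pt")
tgt = tokenizer(labels, padding=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4, eps=1e-8)
model.train()
for epoch in range(2):                      # two epochs as in the paper
    out = model(input_ids=src.input_ids, attention_mask=src.attention_mask,
                labels=tgt.input_ids)       # padding in labels is not masked here
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```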
2
We first introduce the strategies to incorporate multiple states and the imitation learning method for generating approximations of future states. We then introduce the merging gate that adaptively fuses past and future states. Finally, we present the training process and the exit condition used during inference.

Existing work (Xin et al., 2020) focuses on making the exit decision based on a single branch classifier. The consequent unreliable results motivate recent advances that use consecutive states to improve accuracy and robustness. However, the model prediction is still limited to using several local states. In contrast, we investigate how to incorporate all the past states from a global perspective. The existing strategy using consecutive consistent prediction labels can be easily extended to a global version that counts the majority of the predicted labels, which we regard as a voting strategy. Another alternative is the commonly used ensemble strategy that averages the output probabilities for prediction. Besides these naive solutions, we explore the following strategies to integrate multiple states into a single one:

• Max-Pooling: The max-pooling operation is performed on all available states, resulting in the integrated state.
• Avg-Pooling: The average-pooling operation is performed on all available states, resulting in the integrated state.
• Attn-Pooling: Attentive pooling takes the weighted summation of all available states as the integrated state. The attention weights are computed with the last state as the query.
• Concatenation: All available states are concatenated and then fed into a linear transformation layer to obtain the compressed state.
• Sequential Neural Network: All available states are sequentially fed into an LSTM, and the hidden output of the last time step is regarded as the integrated state.

Formally, the state of the i-th layer is denoted as $s_i$. When the forward propagation proceeds to the i-th intermediate layer, all the past states $s_{1:i}$ are incorporated into a global past state $s_p$:

EQUATION

where $G(\cdot)$ refers to one of the state incorporation strategies.

Existing work on early exit stops inference at an intermediate layer and ignores the valuable features captured by the future layers. Such treatment is partly rationalized by the recent claim (Kaya et al., 2019) that shallow layers are adequate for making a correct prediction. However, Jawahar et al. (2019) reveal that pre-trained language models capture a hierarchy of linguistic information from the lower to the upper layers, e.g., the lower layers learn surface or syntactic features while the upper layers capture high-level information such as semantic features. We hypothesize that some instances not only rely on syntactic features but also require semantic features, so it is undesirable to consider only the features captured by shallow layers. Therefore, we propose to take advantage of both past and future states. Normally, we can directly fetch the past states, while using future information is intractable since the future states are inaccessible before passing through the future layers. To bridge this gap, we propose a simple method to approximate the future states in light of imitation learning (Ross et al., 2011; Nguyen, 2016; Ho and Ermon, 2016). We couple each layer with an imitation learner. During training, the imitation learner is encouraged to mimic the representation of the real state of that layer.
Through this layer-wise imitation, we can obtain approximations of the future states with minimum cost. The illustration of the future imitation learning during inference is shown in Figure 2 . To be precise, we intend to obtain a state approximation of the j-th layer if the forward pass exits at the intermediate i-th layer for any j > i. During training, we pass through the entire n-layer model but we simulate the situation that the forward pass ends up at the i-th layer for any i < n. The j-th learner corresponding to the j-th layer takes s i as input and outputs an approximationŝ i j of the real state s j . Then s j serves as a teacher to guide the jth imitation learner. We adopt cosine similarity as the distance measurement and penalize the discrepancy between the real state s j and the learned statê s i j . Let L i cos denotes the imitation loss of the situation that the forward pass exits at the i-th layer, it is computed as the average of the similarity loss for any j > i. Since the exit layer i can be any number between 2 to n during inference, we go through all possible number i and average the corresponding L i cos , resulting the overall loss L cos :EQUATIONL i cos = 1 n − i n j=i+1 l i,j cos (s j ,ŝ i j ) (4) L cos = 1 n − 1 n i=2 L i cos (5)where • denotes the L 2 norm. Learner j (•) is a simple feed-forward layer with learnable parameters W i and b i .During training, the forward propagation is computed on all layers and all imitation learners are encouraged to generate representations close to the real states. During inference, the forward propagation proceeds to the i-th intermediate layer and the subsequent imitation learners take the i-th real state as input to generate the approximations of future states. Then the approximations are incorporated into a comprehensive future state s f with one of the global strategies introduced before:EQUATIONwhereŝ i i+1:n denotes the approximations of the states from the (i+1)-th layer to the n-th layer.We then explore how to adaptively merge the past information and future information. Intuitively, the past state s p and the future state s f are of different importance since the authentic past states are more reliable than our imitated future states. In addition, different instances depend differently on high-level features learned by future layers. Therefore, it is indispensable to develop an adaptive method to automatically combine the past state s p and the future state s f . In our work, we design an adaptive merging gate to automatically fuse the past state s p and the future state s f . As the forward propagation proceeds to the i-th layer, we compute the reliability of the past state s p , and the final merged representation is a trade-off between these two states:α = sigmoid(FFN(s p )) (7) z i = αs p + (1 − α)s f (8)where z i is the merged final state and FFN(•) is a linear feed forward layer of the merging gate. During training, each layer can generate the approximated states of future and obtain a merged final state which is used for prediction. Then the model will be updated with the layer-wise crossentropy loss against the ground-truth label y. The merging gate adaptively learns to adjust the balance under the supervision signal given by ground-truth labels. However, with the layer-wise optimization objectives, the shallow layers will be updated more frequently since they receive more updating signals from higher layers. 
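A minimal sketch of the layer-wise imitation objective follows, assuming each imitation learner is a single linear layer (as stated) and that the per-pair loss is 1 minus the cosine similarity; whether the real state s_j is detached as a fixed teacher is also an assumption.

```python
# Sketch of the imitation loss: for each simulated exit layer i, every future
# learner j > i maps s_i to an approximation of s_j and is penalised by the
# cosine discrepancy; losses are averaged as in Eqs. 4-5.
import torch.nn as nn
import torch.nn.functional as F

n_layers = 12
learners = nn.ModuleList(nn.Linear(768, 768) for _ in range(n_layers))

def imitation_loss(states):
    """states: list of the n real layer states s_1..s_n, each [batch, hidden]."""
    n = len(states)
    total = 0.0
    for i in range(2, n):                  # simulated exit layers (i = n has no future)
        per_exit = 0.0
        for j in range(i + 1, n + 1):      # future layers j > i
            s_hat = learners[j - 1](states[i - 1])             # \hat{s}^i_j from s_i
            per_exit += 1 - F.cosine_similarity(states[j - 1].detach(), s_hat).mean()
        total += per_exit / (n - i)        # L^i_cos
    return total / (n - 1)                 # L_cos, mirroring Eq. 5
```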
To address this issue, we heuristically re-weight the cross entropy loss of each layer depending on its depth i and get its weight w i . The updating procedure is formalized as:EQUATIONEQUATIONEQUATIONThe overall loss is computed as follows:EQUATION3.4 Fine-tuning and InferenceHere we introduce the fine-tuning technique and the exit condition at the inference stage.Fine-tuning The representations learned by shallow layers have a big impact on performance in the early exit framework since the prediction largely depends on the states of shallow layers. Most existing work updates all of the model layers at each step during fine-tuning to adapt to the data of downstream tasks. However, we argue that such an aggressive updating strategy may undermine the well-generalized features learned in the pretraining stage. In our work, we try to balance the requirements of maintaining features learned in pre-training and adapting to data at the fine-tuning stage. Specifically, the parameters of a layer will be frozen with a probability p and the probability p linearly decreases from the first layer to the L-th layer in a range of 1 to 0.Inference Following Xin et al. (2020), we quantify the prediction confidence e with the entropy of the output distribution p i of i-th layer:EQUATIONThe inference stops once the confidence e(p i ) is lower than a predefined threshold τ . The hyperparameter τ is adjusted according to the required speed-up ratios. If the exit condition is never reached, our model degrades into the common case of inference that the complete forward propagation is accomplished.
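The sketch below illustrates two pieces described above: the fine-tuning scheme that freezes each layer with a probability decreasing linearly from 1 (first layer) to 0 (last layer), and the entropy-based exit condition with threshold τ. The layer list and logits shapes are assumptions.

```python
# Sketch of probabilistic layer freezing at fine-tuning time and the
# entropy-based early-exit test used at inference.
import random
import torch

def freeze_layers(layers):
    """layers: list of transformer blocks, ordered bottom to top."""
    L = len(layers)
    for i, layer in enumerate(layers):
        p = 1.0 - i / (L - 1)              # freeze probability, linear from 1 to 0
        if random.random() < p:
            for param in layer.parameters():
                param.requires_grad = False

def should_exit(logits, tau):
    """Stop inference once the prediction entropy of layer i falls below tau."""
    probs = torch.softmax(logits, dim=-1)
    entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=-1)
    return bool((entropy < tau).all())
```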
2
To test benchmark independence and models' robustness for LSC, we design a set of experiments using two source corpora, a common benchmark, and a common architecture for LSC detection.The first corpus is the "L'Unità" corpus (Basile et al., 2020a) . It covers a time span between 1945-2014 and it has been collected, pre-processed, and released for the DIACR-Ita (Diachronic Lexical Semantics in Italian) task (Basile et al., 2020b) , a LSC change shared task for Italian. Texts were extracted from PDF files by using the Apache Tika library 1 and pre-processed with spaCy 2 for tokenization, PoS-tagging, lemmatization, named entity recognition and dependency parsing. The second corpus was obtained by crawling a publicly available digital archive of the Italian newspaper "La Stampa". The corpus covers a shorter time period and it was pre-processed using the same tools and pipeline of "L'Unità". Each corpus is split into two sub-corpora, C 1 and C 2 , covering different time periods. Table 1 summarises the basic statistics of corpora and the time periods of each sub-corpus.Subcorpus Tokens L'Unità C 1 [1945 -1970] [1990 -2005] 1,193,959,080 The corpora present two major differences. First, as shown in Table 1 , the number of tokens in "La Stampa" is consistently larger than "L'Unità". Second, the political and social orientations of the two newspapers are different. Historically, "L'Unità" has been the official newspaper of the Italian Communist Party and of its successors PDS/DS. "La Stampa" is the oldest newspaper in Italy, traditionally it has voiced centrist and liberal positions.The only benchmark for Italian has been proposed in the context of DIACR-Ita. The dataset contains 18 target lemmas, 6 of which are instances of a LSC. The dataset was manually created using the "L'Unità" corpus, where a valid LSC corresponds to the acquisition of a new meaning by a target word in C 2 .As architecture for automatic LSC detection, we obtain comparable diachronic representations of word meanings by re-implementing the Word2Vec Skipgram model (Mikolov et al., 2013) with Orthogonal Procrustes (OP-SGNS) (Hamilton et al., 2016b). In particular, we adopted the implementation proposed by Kaiser et al. (2020) , a stateof-the-art system that ranked 1 st both at DIACR-Ita and at SemEval 2020 Task 1: Unsupervised Lexical Semantic Change Detection . Model parameters are reported in Appendix A. Word embeddings were generated using lemmas to reduce sparseness and facilitate the evaluation against the benchmark.
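For readers unfamiliar with the alignment step, the sketch below shows the Orthogonal Procrustes rotation between the two time-slice embedding spaces and a simple cosine-distance change score, in the spirit of Hamilton et al. (2016b) and the Kaiser et al. (2020) system; it is an illustration, not the implementation used here, and it omits optional mean-centering and length normalisation.

```python
# Sketch of OP-SGNS alignment: w2v_1 and w2v_2 are gensim Word2Vec models
# trained separately on the C1 and C2 sub-corpora.
import numpy as np

def align(w2v_1, w2v_2):
    """Learn an orthogonal map W such that C1 vectors A satisfy A @ W ~ B (C2)."""
    shared = [w for w in w2v_1.wv.index_to_key if w in w2v_2.wv.key_to_index]
    A = np.stack([w2v_1.wv[w] for w in shared])      # C1 vectors (shared vocab)
    B = np.stack([w2v_2.wv[w] for w in shared])      # C2 vectors (shared vocab)
    U, _, Vt = np.linalg.svd(A.T @ B)                # Procrustes solution
    return U @ Vt

def change_score(word, w2v_1, w2v_2, W):
    """Cosine distance between the aligned C1 vector and the C2 vector of a lemma."""
    v1 = w2v_1.wv[word] @ W
    v2 = w2v_2.wv[word]
    return 1 - float(v1 @ v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
```

Target lemmas with a larger change score would then be ranked as more likely instances of lexical semantic change.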
2
We describe how to construct and enrich a personalized language model in this section. In the first subsection, we propose a social-driven, personalized mixture language model. The original, poorly estimated user language model is enriched with a set of relevant document language models. In Section 2.2, a graphical model is presented to identify the mixture weights of each mixture component. The relative importance of each mixture component, i.e. document language model, is determined with the use of prior knowledge that comes from a social network. In Section 2.3, we describe how the model is optimized under a lack of labelled information.The language model of a collection of documents can be estimated by normalizing the counts of words in the entire collection (Zhai, 2008) . To build a user language model, one naïve way is first to normalize word frequency , within each document, then average over all the documents in a user's document collection. The resulting unigram user language model is:| | ∑ , | | ∈ | | ∑ ∈ (1)where is the language model of a particular document, is the user's document collection, and |•| denotes the number of elements in a set. This formulation is basically an equal-weighted finite mixture model.A simple yet effective way to smooth a language model is to linearly interpolate with a background language model (Chen & Goodman, 1996; Zhai & Lafferty, 2001 ). In the linear interpolation method, all background documents are treated equally. The entire document collection is added to the user language model with the same interpolation coefficient. On social media, however, articles are often short and noisy. The user language models generated in this way are prone to overfitting. To obtain a better personalized user language model, we must take into consideration the complicated document-level correlations and dissimilarities, and this is where our idea was born.Our main idea is to specify a set of relevant documents for the target user and enrich the user language model with these documents. Then, through the use of the information embedded in a social network, the relative importance of these documents is learnt. Suppose that the target user is u. Letting denote the content posted by people that are most relevant to u (e.g. friends on a social network), our idea can be concisely expressed as:∑ ∈ (2)where is the mixture weight of the language model of document d, and ∑ 1.Documents posted by irrelevant users are ignored as we believe the user language model can be personalized better by exploiting the social relationship in a more structured way. In our experiments, we choose the documents posted by friends as .Also note that we have made no assumption about how the "base" user language model is built. In practice, it need not be models following maximum likelihood estimation, but any language model can be integrated into our framework to achieve a more refined model. Furthermore, any smoothing method can be applied to the language model without degrading the effectiveness.Now, we discuss how the mixture weights can be estimated. We introduce a factor graph model to make use of the diverse information on a social network. Factor graph (Kschischang et al., 2006) is a bipartite graph consisting of a set of random variables and a set of factors that signifies the relationships among the variables. It is best suited to situations where the data is clearly of a relational nature (Wang et al., 2012) . 
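A toy sketch of Equations 1 and 2 is given below. It assumes the enriched model linearly interpolates the base user language model with the weighted mixture of relevant-document models (coefficient lam), and that the mixture weights π_d come from the normalised factor-graph marginals described in the following subsections; both the interpolation form and the smoothing are simplifications.

```python
# Sketch of the equal-weight baseline user LM (Eq. 1) and the enriched,
# socially weighted mixture (Eq. 2). Documents are plain token lists.
from collections import Counter

def doc_lm(tokens):
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def user_lm(user_docs):
    """Equal-weight average of the user's own document LMs (Eq. 1)."""
    lms = [doc_lm(d) for d in user_docs]
    vocab = {w for lm in lms for w in lm}
    return {w: sum(lm.get(w, 0.0) for lm in lms) / len(lms) for w in vocab}

def enriched_lm(base_lm, relevant_docs, weights, lam=0.5):
    """Interpolate the base user LM with weighted relevant-document LMs (Eq. 2)."""
    mix = {}
    for d, pi in zip(relevant_docs, weights):        # weights assumed to sum to 1
        for w, p in doc_lm(d).items():
            mix[w] = mix.get(w, 0.0) + pi * p
    vocab = set(base_lm) | set(mix)
    return {w: lam * base_lm.get(w, 0.0) + (1 - lam) * mix.get(w, 0.0)
            for w in vocab}
```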
The joint distribution of the variables is factored according to the graph structure. Using a factor graph model, we can incorporate the knowledge into the potential function for optimization and perform joint inference over documents.A factor graph model is presented in Figure 1 . As can be seen from Equation 2, there are | | unknown mixture weights to be estimated. For each mixture weight , we put a Bernoulli random variable in the factor graph. The value 1 means that the document d should be included in the enriched personalized language model of the target user. In this sense, a larger value of 1 implies a higher mixture weight of d. In particular, we set to be proportional to 1 in the final estimation. ( 3)where 〈 〉 is a vector of predefined feature functions and α is the parameter vector to be learnt. We assume that all feature functions take a value of zero if 0. So, the larger the value of 1 is, the higher is the value 1 relative to 0 .In other words, is (locally) believed to be higher.In our experiment, we define the vector of feature functions as 〈 , , , Similarity function f sim . The similarity between language models of the target user and a document should play an important role. We use cosine similarity between two unigram models in our experiments. Document quality function f oov . The out-of-vocabulary (OOV) ratio is used to measure the quality of a document. It is defined as:EQUATIONwhere V is the vocabulary set of the entire corpus, with stop words excluded. EQUATIONwhere g is a vector of feature functions indicating whether two variables are correlated. We assume that the two variables are not connected in the factor graph if g(y i , y j ) = 0.If we further denote the set of all variables linked to as , then, for any variable y d , we obtain the following result:EQUATIONwhich is a function of only. This expression will be used in the following equations.We define the vector of feature functions 〈 , 〉 as follows. User relationship function g rel . We assume that two variables y i and y j are higher correlated if The similarity between documents , is measured by the cosine similarity between two unigram language models. Co-category function g cat . For any two variables y i and y j , it is intuitive that the two variables The flexibility of the proposed framework lies in the following aspects. The factor graph model is adaptable. The feature functions are not restricted to the ones we have used and can be freely added or redesigned in order to properly model different datasets. The set of relevant documents can be changed. In our experiment, we used the documents posted by friends to enrich the language model. Nevertheless, this is not a requirement. Whenever appropriate, documents posted by friends of friends, or any arbitrary set of documents can be adapted to tackle this problem. As we mentioned at the end of Section 2.1, the "base" user language model can be already smoothed by any technique. Furthermore, the language models need not be unigram models. If a higher order n-gram model is more suitable, it can be used in our framework. For our particular dataset, however, we find that it gives no advantage to use higher order n-gram models.Let Y be the set of all random variables. The joint distribution encoded by the factor graph model is given by multiplying all potential functions:EQUATIONwhere Z is a normalization term to ensure that the probability sums to one.The desired marginal distribution can be obtained by marginalizing all other variables. 
Under most circumstances, however, the factor graph is densely connected. This makes the exact inference intractable, and approximate inference is required. After obtaining the marginal probabilities with the approximate inference algorithm, the mixture weights in Eq. 2 are estimated by normalizing the corresponding marginal probabilities to satisfy the constraint ∑ 1. The normalization can be written as:EQUATIONIt can be verified that the above equation leads to a valid probability distribution for our mixture model.The proposed factor graph model has | | | | parameters, where | | means the dimensionality of the vector . Combining Equation 2 and Equation 10, it can be observed that the total number of parameters in the mixture model is reduced from 1 | | to1 | | | |, lowering the risk of overfitting.A factor graph is often optimized by gradient-based methods. Unfortunately, since the ground truth values of the mixture weights are not available, we are prohibited from using these approaches. Here, we propose a two-step iterative procedure to optimize our model with respect to the model perplexity on held-out data.At first, all of the model parameters (i.e. , , ) are initialized randomly. Then, we infer the marginal probabilities of the random variables. Given these marginal probabilities, we can evaluate the perplexity of the user language model on a held-out dataset and search for better parameters. This procedure is repeated until convergence. We have also tried to train the model by optimizing the accuracy of the authorship attribution task. Nevertheless, we find that models trained by optimizing the perplexity give better performance.
2
The systems that were constructed by this team included two component models: a boosting model and a maximum entropy model as well as a combination system. The component models were also used in other Senseval-3 tasks: Semantic Role Labeling (Ngai et al., 2004) and the lexical sample tasks for Chinese and English, as well as the Multilingual task (Carpuat et al., 2004) .To perform parameter tuning for the two component models, 20% of the samples from the training set were held out into a validation set. Since we did not expect the senses of different words to share any information, the training data was partitioned by the ambiguous word in question. A model was then trained for each ambiguous word type. In total, we had 40 models for Basque, 27 models for Catalan, 45 models for Italian and 39 models for Romanian.Boosting is a powerful machine learning algorithm which has been shown to achieve good results on a variety of NLP problems. One known property of boosting is its ability to handle large numbers of features. For this reason, we felt that it would be well suited to the WSD task, which is known to be highly lexicalized with a large number of possible word types.Our system was constructed around the Boostexter software (Schapire and Singer, 2000) , which implements boosting on top of decision stumps (deci-sion trees of one level), and was originally designed for text classification. Tuning a boosting system mainly lies in modifying the number of iterations, or the number of base models it would learn. Larger number of iterations contribute to the boosting model's power. However, they also make it more prone to overfitting and increase the training time. The latter, a simple disadvantage in another problem, becomes a real issue for Senseval, since large numbers of models (one for each word type) need to be trained in a short period of time.Since the available features differed from language to language, the optimal number of iterations also varied. Table 1 shows the performance of the model on the validation set with respect to the number of iterations per language. Table 4 3.2 Maximum Entropy The other individual system was based on the maximum entropy model, another machine learning algorithm which has been successfully applied to many NLP problems. Our system was implemented on top of the YASMET package (Och, 2002) .Due to lack of time, we did not manage to finetune the maximum entropy model. The YASMET package does provide a number of easily variable parameters, but we were only able to try varying the feature selection count threshold and the smoothing parameter, and only on the Basque data.Experimentally, however, smoothing did not seem to make a difference. The only change in performance was caused by varying the feature selection count threshold, which controls the number of times a feature has to be seen in the training set in order to be considered. Table 2 shows the performances of the system on the Basque validation set, with count thresholds of 0, 1 and 2.Since word sense disambiguation is known to be Threshold 0 1 2 Accuracy 55.62% 66.13% 65.68% Table 2 : Maximum Entropy Models on Basque validation set. a highly lexicalized task involving many feature values and sparse data, it is not too surprising that setting a low threshold of 1 proves to be the most effective. The final system kept this threshold, smoothing was not done and the GIS iterations allowed to proceed until it converged on its own. These parameters were used for all four languages. 
The maximum entropy model was not entered into the competition as an official contestant; however, it did participate in the combined system.

Ensemble methods have been widely studied in NLP research, and it is well known that a set of systems will often combine to produce better results than the best individual system alone. The final system contributed by the Swarthmore-Hong Kong team was such an ensemble. In addition to the boosting and maximum entropy models described earlier, three other models were included: a nearest-neighbor clustering model, a decision list, and a Naïve Bayes model. The five models were combined by a simple weighted majority vote, with an ad-hoc weight of 1.1 given to the boosting and decision list systems and 1.0 otherwise, and with ties broken arbitrarily.

Due to an unfortunate error in the input data of the voting algorithm (Wicentowski et al., 2004), the officially submitted results for the combined system were poorer than they should have been. Table 3 compares the official (submitted) results to the corrected results on the test set. The decrease in performance caused by the error ranged from 0.9% to 3.3%.
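The weighted majority vote is simple enough to sketch directly; the snippet below is an illustration of the scheme described above (1.1 for boosting and the decision list, 1.0 for the rest, ties broken arbitrarily), with system names chosen only for readability.

```python
# Sketch of the weighted majority vote used in the combined WSD system.
from collections import defaultdict

WEIGHTS = {"boosting": 1.1, "decision_list": 1.1,
           "maxent": 1.0, "nearest_neighbor": 1.0, "naive_bayes": 1.0}

def weighted_vote(predictions):
    """predictions: dict mapping system name -> predicted sense label."""
    scores = defaultdict(float)
    for system, label in predictions.items():
        scores[label] += WEIGHTS.get(system, 1.0)
    return max(scores, key=scores.get)   # first-seen label wins ties (arbitrary)

print(weighted_vote({"boosting": "sense1", "decision_list": "sense2",
                     "maxent": "sense2", "nearest_neighbor": "sense1",
                     "naive_bayes": "sense2"}))   # -> "sense2" (3.1 vs 2.1)
```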
2
This study focuses on cross-lingual ED, which aims to transfer knowledge from a source language with abundant labeled data to a target language with insufficient training data. Figure 2 visualizes the overall architecture of our model, which consists of three main components: (1) Monolingual embedding layer, which transforms each token into a continuous vector representation. (2) Context-dependent lexical mapping, which maps each word in the source language to its best-suited translation in the target language, by examining its contextual representation and imposing a selective attention over different translation candidates. (3) Shared syntactic order event detector, which employs a Graph Convolutional Networks (GCNs) to explore syntactic similarity of resources of different languages, in order to achieve multilingual co-training.For the sake of convenience, in the following illustrations, we assume the source language is English and the target language is Chinese, and we use an English sentence s = {w 1 , w 2 , . . . , w n } to illustrate our idea.In the monolingual embedding layer, each word is assigned to a distributed vector as its representation. Specifically, we first train English/Chinese word embeddings on the corresponding Wikipedia dumps via Skip-gram model (Mikolov et al., 2013) with a dimensional size d = 300. And then we transform each token into its word embedding as its vectorized feature representation.In this way, s is transformed into an embedding matrixE s = [x 1 , x 2 , . . . , x n ] T , where x i ∈ R dindicates the word embedding of the token w i .For each token w i in s, context-dependent lexical mapping aims to search for its best-suited word translation according to its contextual representation. This process involves: 1) learning multilingual alignment, 2) retrieving translation candidates, and 3) ranking translation words via a selective attention mechanism.Let X and Y be the English and Chinese embedding spaces. In order to achieve multilingual alignment, we learn a mapping W ∈ R d×d from X to Y via a seed dictionary with a size of m, by optimizing:W * = arg min W ∈M d (R) ||W X dic − Y dic || F (1)where M d (R) is the space of d × d matrices; X dic , Y dic ∈ R(SVD) of Y dic X T dic , i.e., W * = arg min W ∈O d (R) ||W X dic −Y dic || F = U V T (2) where U ΣV T = SVD(Y dic X T dic ).Next, we retrieve translation candidates for each token w i in s. Specifically, we first project w i into the aligned embedding space (i.e., by applying W on x i ), and then we explore its neighborhood to find the nearest Chinese words as its translation candidates. In order to measure the distance Figure 2: The overview architecture of our model. The figure illustrates the process of performing cross-lingual transfer for an English sentence "A man died when a tank fired on the hotel" into Chinese and using the shared syntactic order event detector to predict the event type for the word "fired".of w i and a Chinese word y in the aligned space, we adopt the cross-domain similarity local scaling (CSLS) metric (Lample et al., 2018) :EQUATIONwhere y denotes the (Chinese) word embedding of y; r Y (W x i ) indicates the mean cosine similarity between W x i and its K neighbors in Y , which is defined as:r Y (W x i ) = 1 K y ∈N Y (W x i ) cos(W x i , y )where N Y (W x i ) denotes the neighborhood associate with W x i in Y . In our method, for w i , we take J Chinese words which have the smallest CSLSs as its translation candidates. 
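To make the candidate-retrieval step concrete, the sketch below computes CSLS scores between a mapped source word and every target word, following Lample et al. (2018); it assumes the Procrustes mapping W has already been learned as in Eq. 2 and uses a brute-force neighbourhood computation that is only practical for a sketch.

```python
# Sketch of CSLS-based retrieval of translation candidates (Eq. 3).
# X_src, Y_tgt: row-wise L2-normalised source/target embedding matrices.
import numpy as np

def csls_scores(x_i, X_src, Y_tgt, W, K=10):
    """CSLS between the mapped source word W x_i and every target word."""
    Q = X_src @ W.T                                     # all mapped source vectors
    Q = Q / np.linalg.norm(Q, axis=1, keepdims=True)
    q = (W @ x_i) / np.linalg.norm(W @ x_i)
    cos_q = Y_tgt @ q                                   # cos(W x_i, y) for every y
    r_y = np.mean(np.sort(cos_q)[-K:])                  # mean cos to K target neighbours
    cos_t = Y_tgt @ Q.T                                 # target-to-mapped-source cosines
    r_x = np.mean(np.sort(cos_t, axis=1)[:, -K:], axis=1)   # r(y) for every target y
    return 2 * cos_q - r_y - r_x

def top_candidates(x_i, X_src, Y_tgt, W, J=5):
    """Indices of the J target words with the highest CSLS score."""
    return np.argsort(-csls_scores(x_i, X_src, Y_tgt, W))[:J]
```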
We denote by T (w i ) the set of translation candidates for w i , where T(w i ) jindicates the jth element of T (w i ) .Finally, for each token w i , we perform a contextaware selective attention mechanism to weigh each translation candidate in T (w i ) and get the best-suited translation for it.Learning Contextual Representation. We employ the self-attention mechanism (Vaswani et al., 2017) to learn context representation ofw i . Specifically, given E s = [x 1 , x 2 , . . . , x n ] T ,we use different single-layer neural networks to learn queries Q, keys K, values V respectively. For example, Q = tanh(E s W m + b m ), where W m ∈ R d×d and b ∈ R d are parameter matrix and bias respectively. Then, we compute a self-attention matrix by computing:EQUATIONwhere d indicates the word embedding dimension. We take c i as the contextual representation of w i . Learning Selective Attention. For each token w i , after obtaining its translation candidates list T (w i ) and contextual representation c i , we impose a selective attention mechanism to automatically weigh each candidate. Specifically, the weight of the jth candidate T(w i ) jis computed as:EQUATIONwhere m j measures the semantic relatedness of c i and T(w i ) j, which is computed by:EQUATIONwhere [;] indicates the concatenation operations; y(w i ) jdenotes the Chinese word embedding ofT (w i ) j; W r ∈ R d×1 and b r ∈ R are parameter matrix and bias respectively. Finally, we select the candidate which has the maximal attention weight as the best-suited translation for w i , which is denoted by w i . In this way, the original sentence s is transfer into a Chinese word sequences t = {w 1 , w 2 , . . . , w n } with a same length.As English and Chinese usually have different word orders, the transferred result t might be seen as a corrupted sentences from Chinese, which could introduce noise for multilingual co-training. We tackle this problem by proposing a Graph Convolutional Neural Networks (GCNs) (Kipf and Welling, 2016) based syntactic order event detector, which provides each word with a feature vector based on its immediate neighbors in the syntactic graph irrespective of its position in the sentence. This allows our model to train with the translated data t and the other labeled data in Chinese indiscriminately.Specifically, for each token w i , our model computes a graph convolution feature vector based on its immediate neighbors in the syntactic graph. Figure 3 illustrates the process of extracting the feature for "fired".Let N (w i ) denote the set of neighbors of w i in the syntactic graph, and L(w i , v) indicate the label of the dependency arc (w i → v) (For example, L("fired", "hotel") = nmod in the example in Figure 3) . The original GCNs compute a graph convolution vector for w i at (k+1)th layer by:EQUATIONwhereg denotes the ReLU function; W k L(w i ,v)and b k L(w i ,v) are parameters of the dependency label L(w i , v) in the kth layer. However, retaining parameters for every dependency label is space-consuming and compute-intense (there are approximately 50 labels), in our model, we limit L(w i , v) to have only three types of labels 1) an original edge, 2) a self loop edge, and 3) an added inverse edge, as suggested in (Nguyen and Grishman, 2018) . Additionally, since the generated syntactic parsing structures usually contain noise, we apply attention gates on the edges to weigh their individual importances:EQUATIONwhere σ is the logistic sigmoid function. v) are the weight matrix and the bias of the gate. 
With this gating mechanism, the final syntactic GCNs computation in our model is: Figure 3 : The illustration of using GCNs to compute the order-invariant feature for the word "fired".U k L(w i ,v) and d k (w i ,EQUATION 2  3  4  5 We set the initial vectors h 0 w i for w i as the Chinese word embedding of w i (its translated word), and we stack 2 layers of GCNs (i.e., k = 2) to obtain the final feature for w i , denoted as f i .Our model incorporates a logistic regression classifier to predict w i 's event type. Specifically, we compute a prediction vector for w i by taking f i as the input:EQUATIONwhere W o ∈ R d×c and b o ∈ R c are parameters, and c is the total number of event types (i.e., 34 in this study). The probability of t-th class type is denoted as P (t|w i ), which corresponds to the t-th element of out.To enable multilingual co-training, we adopt the cross-entropy loss, and we use λ to balance the contribution of multilingual resources (which is set as 0.7 through a grid search):EQUATIONwhere Θ denotes all the parameters in our model; w e ranges over each token in the translated examples and w c enumerate each token in the original Chinese training set; l w e and l wc denote the ground-truth event types of w e and w c respectively. We adopt Adam rules (Kingma and Ba, 2014) to update our model's parameters and add dropout layers to prevent over-fitting.
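A compact sketch of one gated syntactic-GCN layer follows, with the three collapsed edge types (original, inverse, self-loop), per-type weights, and a scalar sigmoid gate per edge as described above; the edge-list interface and the per-edge loop are simplifications for clarity, not the authors' batched implementation.

```python
# Sketch of a gated syntactic GCN layer with three edge types.
import torch
import torch.nn as nn

class GatedSyntacticGCN(nn.Module):
    EDGE_TYPES = 3                         # original, inverse, self-loop

    def __init__(self, hidden=300):
        super().__init__()
        self.W = nn.ModuleList(nn.Linear(hidden, hidden) for _ in range(self.EDGE_TYPES))
        self.gate = nn.ModuleList(nn.Linear(hidden, 1) for _ in range(self.EDGE_TYPES))

    def forward(self, h, edges):
        """h: [n, hidden] node states; edges: list of (src, dst, edge_type) tuples."""
        out = torch.zeros_like(h)
        for src, dst, etype in edges:
            g = torch.sigmoid(self.gate[etype](h[src]))        # attention gate on edge
            msg = (g * self.W[etype](h[src])).unsqueeze(0)
            out = out.index_add(0, torch.tensor([dst]), msg)   # aggregate into dst node
        return torch.relu(out)                                  # g(.) = ReLU
```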
2
In this section we describe (1) the dataset used, (2) the modalities, and (3) our MAST model's architecture. The code for our model is available online 1 .1 https://github.com/amankhullar/mastWe use the 300h version of the How2 dataset (Sanabria et al., 2018) of open-domain videos. The dataset consists of about 300 hours of short instructional videos spanning different domains such as cooking, sports, indoor/outdoor activities, music, and more. A human-generated transcript accompanies each video, and a 2 to 3 sentence summary is available for every video, written to generate interest in a potential viewer. The 300h version is used instead of the 2000h version because the audio modality information is only available for the 300h subset. The dataset is divided into the training, validation and test sets. The training set consists of 13,168 videos totaling 298.2 hours. The validation set consists of 150 videos totaling 3.2 hours, and the test set consists of 175 videos totaling 3.7 hours. A more detailed description of the dataset has been given by Sanabria et al. (2018) . For our experiments, we took 12,798 videos for the training set, 520 videos for the validation set and 127 videos for the test set.We use the following three inputs corresponding to the three different modalities used:• Audio: We use the concatenation of 40dimensional Kaldi (Povey et al., 2011) filter bank features from 16kHz raw audio using a time window of 25ms with 10ms frame shift and the 3-dimensional pitch features extracted from the dataset to obtain the final sequence of 43-dimensional audio features.• Text: We use the transcripts corresponding to each video. All texts are normalized and lower-cased.• Video: We use a 2048-dimensional feature vector per group of 16 frames, which is extracted from the videos using a ResNeXt-101 3D CNN trained to recognize 400 different actions (Hara et al., 2018) . This results in a sequence of feature vectors per video. MAST is a sequence to sequence model that uses information from all three modalities -audio, text and video. The modality information is encoded using Modality Encoders, followed by a Trimodal Hierarchical Attention Layer, which combines this information using a three-level hierarchical attention approach. It attends to two pairs of modalities (δ) (Audio-Text and Video-Text) followed by the modality in each pair (β and γ), followed by the individual features within each modality (α). The decoder utilizes this combination of modalities to generate the output over the vocabulary.modal Hierarchical Attention Layer and the Trimodal Decoder.The text is embedded with an embedding layer and encoded using a bidirectional GRU encoder. The audio and video features are encoded using bidirectional LSTM encoders. This gives us the individual output encoding corresponding to all modalities at each encoder timestep. The tokens ti corresponding to modality k are encoded using the corresponding modality encoders and produce a sequence of hidden states h (k)i for each encoder time step (i).We build upon the hierarchical attention approach proposed by Libovickỳ and Helcl (2017) to combine the modalities. 
On each decoder timestep i, the attention distribution (α) and the context vector for the k-th modality is first computed indepen-dently as in :e (k) ij = v (k)T a tanh(W (k) a s i + U (k) a h (k) j + b (k) att ) (1) α (k) ij = softmax(e (k) ij ) (2) c (k) i = N k j=1, k∈{audio, text, video} α (k) ij h (k) j (3)Where s i is the decoder hidden state at i-th decoder timestep, h(k)j is the encoder hidden state at j-th encoder timestep, N k is the number of encoder timesteps for the k-th modality and e(k)ij is attention energy corresponding to them. W a and U a are trainable projection matrices, v a is a weight vector and b att is the bias term.We now look at two different strategies of combining information from the modalities. The first is a simple extension of the hierarchical attention combination. The second is the strategy used in MAST, which combines modalities using three levels of hierarchical attention.To obtain our MAST model, the context vectors for audio-text and text-video are combined using a second layer of hierarchical attention mechanisms (β and γ) and their context vectors are computed separately. These context-vectors are then combined using the third hierarchical attention mechanism (δ).1. Audio-Text:e (k) i = v T d tanh(W d s i + U (k) d c (k) i ) (7) β (k) i = softmax(e (k) i ) (8) d (1) i = k∈{audio, text} β (k) i U (k) e c (k) i (9)2. Video-Text:e (k) i = v T f tanh(W f s i + U (k) f c (k) i ) (10) γ (k) i = softmax(e (k) i ) (11) d (2) i = k∈{video, text} γ (k) i U (k) g c (k) i (12) where d (l)i , l ∈ {audio-text, video-text} is the context vector obtained for the corresponding pair-wise modality combination.Finally, these audio-text and video-text context vectors are combined using the third and final attention layer (δ). With this trimodal hierarchical attention architecture, we combine the textual modality twice with the other two modalities in a pair-wise manner, and this allows the model to pay more attention to the textual modality while incorporating the benefits of the other two modalities.EQUATIONcEQUATIONwhere c f i is the final context vector at i-th decoder timestep.We use a GRU-based conditional decoder (Firat and Cho, 2016) to generate the final vocabulary distribution at each timestep. At each timestep, the decoder has the aggregate information from all the modalities. The trimodal decoder focuses on the modality combination, followed by the individual modality, then focuses on the particular information inside that modality. Finally, it uses this information along with information from previous timesteps, which is passed on to two linear layers to generate the next word from the vocabulary.
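The three levels of the trimodal hierarchy can be summarised in a short sketch: one additive-attention module is reused at the β (audio-text), γ (video-text), and δ (pairwise) levels. The per-modality projection matrices U_e and U_g are folded away here for brevity, and all dimensions are assumptions.

```python
# Condensed sketch of the trimodal hierarchical attention combination.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierAttn(nn.Module):
    """Additive attention over a set of context vectors, conditioned on s_i."""
    def __init__(self, dim):
        super().__init__()
        self.W, self.U, self.v = nn.Linear(dim, dim), nn.Linear(dim, dim), nn.Linear(dim, 1)

    def forward(self, s_i, contexts):                  # s_i: [B, dim]; contexts: [B, M, dim]
        e = self.v(torch.tanh(self.W(s_i).unsqueeze(1) + self.U(contexts)))
        a = F.softmax(e, dim=1)                        # attention weights over the M inputs
        return (a * contexts).sum(dim=1)               # fused context vector

class TrimodalFusion(nn.Module):
    def __init__(self, dim=512):
        super().__init__()
        self.audio_text = HierAttn(dim)                # beta level
        self.video_text = HierAttn(dim)                # gamma level
        self.pairwise = HierAttn(dim)                  # delta level

    def forward(self, s_i, c_audio, c_text, c_video):  # each context: [B, dim]
        d1 = self.audio_text(s_i, torch.stack([c_audio, c_text], dim=1))
        d2 = self.video_text(s_i, torch.stack([c_video, c_text], dim=1))
        return self.pairwise(s_i, torch.stack([d1, d2], dim=1))   # final c^f_i
```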
2
We adapted the time adverbial test (Dowty, 1986; Vendler, 1957; Rothstein, 2008) into forced-choice fill-in-the-blank tests. For human subjects, we ask participants to select "in" or "for" to fill in the blank of a given sentence, e.g.:John loved Mary 2 years. a. in b. for For pretrained transformers, we used maskedlanguage modeling. Specifically, we replace the preposition with the masking token, e.g., John loved Mary [MASK] 2 years and then compare the probability at the [MASK] token of predicting 'for' vs. predicting 'in', and select the preposition with the higher probability.We designed two sets of tests, an English set and a novel-word set (see table 1 ).The English set uses sentences and linguistic features frequently discussed in the literature (Vendler, 1957; Krifka, 1989; Pustejovsky, 1991; van Hout, 1999) . The English set will not only provide insights about differences between human performance and model performance, but also assess variability among human preferences against linguistic theory, as studies have found that human judgments are more variable than theoretical claims (Gibson and Fedorenko, 2010) .The novel-word set uses sentence templates designed to target linguistic factors of interest, but uses novel words for verbs and nouns. Novel words are taken from the ARC nonword database 3 with parameters as only orthographically exisiting onsests and bodies that has 4-6 letters and more than 1 phoneme. We also asked two native English speakers to go over the list to make sure the novel words do not associate with existing meanings. This will allow us to separate the influence of other structural/semantic cues from the verb's telicity preference (since novel verbs will have no known telicity) and the event's typical duration (since novel verbs and nouns will have no known duration).The surveys were administered on Qualtrics 4 . 120 native speakers of English (based on their language profile) were recruited via Prolific 5 for the two sets (60 for each). Participants were paid $2.80 for the 25-minute English survey and $1.80 for the 15minute novel-word survey. We filtered out participants who failed to achieve 90% accuracy on filler questions (non-ambiguous forced-choice questions such as 'John grew roses in/on his garden') or did not complete the survey in time, resulting in 59 participants for the English set and 60 participants for the novel-word set. We tested our surveys on BERT uncased (base and large) and RoBERTa (base and large) models. All four models are pretrained transformer models and are frequently used in the NLP field.To investigate whether humans and transformers attend to the aforementioned linguistic cues, we compare their performance under three measures.Firstly, we used a strict decision measure: we ask whether the use of "for" is significantly different from 50% via Wilcoxon sign-rank tests and visualized responses using violin plots 6 for each category of interest. The error bars indicate the 95% bootstrapped confidence interval over the mean.Secondly, we used a soft ranking measure: we ask whether humans or transformers showed a theoretically-motivated tendency (i.e., significantly greater preference for "for" in items expected to have an atelic preference than in items expected to have a telic preference) by fitting Bayesian mixed effect logistic regression models 7 to predict telicity preference from linguistic cues and temporal unit features. We included a random subject intercept for human participants. 
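The masked-language-model version of the forced-choice test can be written in a few lines; the sketch below uses bert-base-uncased and compares the logits for "for" and "in" at the masked position, with the blank marker "___" as an assumed input convention.

```python
# Sketch of the forced-choice masked-LM test for telicity preference.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def preposition_preference(sentence_with_blank):
    """sentence_with_blank uses '___' where the preposition goes."""
    text = sentence_with_blank.replace("___", tok.mask_token)
    inputs = tok(text, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tok.mask_token_id).nonzero()[0, 0]
    with torch.no_grad():
        logits = mlm(**inputs).logits[0, mask_pos]
    score_for = logits[tok.convert_tokens_to_ids("for")]
    score_in = logits[tok.convert_tokens_to_ids("in")]
    return "for" if score_for > score_in else "in"

print(preposition_preference("John loved Mary ___ 2 years."))
```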
In novel word sets for transformers, a random item intercept is included to obtain stable estimates as we repeated many items with only the novel word changed. Features are sliding contrast coded 8 . This coding schema enables the comparison between the mean of predicted variable on one level to the mean of the previous level so that we can see the ranking between features. For instance, if we contrast coded the time unit factor (second, hour, week, year) then, it means the model would compare seconds to hours, hours to weeks and weeks to years. Tables for these tests include one column of coefficients and p-values for each contrast, i.e., for each pairwise comparison.Lastly, we used ablations to evaluate which cues (theoretically-motivated or temporal unit cues) contribute more to explaining participants' responses by comparing the residual change after removing related features via ANOVA tests. Tables for these ablations Table 2 : Influence of temporal units on telicity preference among human subjects and transformers: results of the soft ranking measure described in section 3.3. Figure 1 : Influence of verb type on telicity preference among human subjects and transformers: visualization of the strict decision measure described in section 3.3.
2
The present study is conducted to explore how people in Taiwan, who have Chinese as their L1 and English as a foreign language, process time-moving and ego-moving metaphors. The participants are twenty-five female English-Chinese bilinguals with a mean age of 31.7. They are chosen because they have no difficulty conceptualizing English and Chinese metaphors.

Thirty-two test sentences are designed to examine the participants' accuracy. Sixteen of them are in Chinese, of which nine use time-moving metaphors and the others use ego-moving metaphors. The other sixteen are mostly taken from the study of Gentner, Imai, and Boroditsky (2002), in which eight use time-moving metaphors and the others use ego-moving metaphors. For example: Christmas is six days ahead of New Year's Day.

After the participants read the sample sentences in Chinese and English, they are tested on the Chinese test sentences followed by the English ones. They see each sentence one at a time and indicate whether the event 'I will see you' happened in the past or the future relative to the reference time (4 o'clock); see Figure 3. In total, there are thirty-two such blocks. The order of all the test sentences is randomized so that the subjects will not notice the two metaphorical types.

I will see you before 4 o'clock.
2
In order to achieve maximum result from automated analysis, a flexible methodology is required that allows dynamic integration and accumulation of knowledge into the automatic analysis process. The analysis we pursue for this purpose is positioned on the qualitative side of the analysis spectrum seen as a scale between quantitative and qualitative analysis (Burdick et al., 2012) . Whereas quantitative analysis applies techniques to derive generalisations from large amounts of data, qualitative analysis is characterised by work to identify specific information on data of smaller scale. Starting from baseline automatic analysis, our bottom-up approach of incrementally adding/changing/deleting text annotations captures an increasing but non-exhaustive body of acquired knowledge. This knowledge can then be used for progressive information filtering in order to obtain a workable search space for further information extraction. These methodological elements highlight the importance of scholarly close reading in this process, leaving the scholar to add, change, or delete any automatically acquired knowledge. The tool and its workflow are solely to assist. From a general perspective, our methodology assists legal researchers and practitioners by means of an incremental automation of legal interpretation. This is done by creating/changing/adding text annotations in any number of iterations, increasingly capturing relevant information for the task of identifying Hohfeldian actors and relations. Where each next iteration requires inclusion of changed or complex knowledge beyond the scope of the present automatic analysis, we introduce a feedback mechanism within the workflow involving both a legal and technical expert and the human interpreter of the legal text. Our approach has the advantage that the complexity of the analysis and any unwanted annotations involved in the process can be withheld to any extent from the scholar if this hinders the interpretation process. Scholarly insight should find its way into the next cyclic application by means of an adaptation of the automatic analysis based on the scholarly feedback. When the resulting annotations are deemed acceptable by the expert, these annotations are then serialized into RDF triples according to a data model. The eventual semantic web oriented goal of this exercise is to link up scholarly activity with the semantic web. At the end of the workflow the semantics of EU directives will be represented by a semantically maximally exhaustive set of RDF triples expressing a complex network of legal vocabulary, facts, statements and relations.The workflow consists of a number of stages. Its foundation is linguistic analysis, after which Hohfeldian concepts are step-wise discovered in a heuristic fashion. Figure 1 shows the main stages of the workflow, which can generally be characterised as involving a cyclic improvement of the automatic annotation process by means of an intervention of the scholar in the form of new/changed/deleted text annotations. Also, detailed nontextual feedback on what information is missing or wrong, will be exploited and operationalised by the text engineer for the further improvement of the automatic processing in the next iteration, by adapting the output of the GATE system according to the experts judgements.The fundamental building blocks of our approach are text annotations. By creating and combining annotations the required patterns emerge. 
The tool we are working with is GATE 1 (General Architecture for Text Engineering) (Cunningham et al., 2002) , which is an open-source framework for language engineering applications. It provides an interface for viewing, adding, and creating text annotations, which have been produced by a purpose-built automatic text analysis pipeline. GATE ensures repeatability of application pipelines and reusability of the results of previously run applications.linguistic pre-processing Using GATE, we applied existing pipelines to the directives texts for tokenisation, part of speech tagging, lemmatisation and term extraction. This provides us with a normalised linguistic framework for further processing.In order to focus our Hohfeldian analysis we selected important terminology from the directives under consideration. Our reasoning behind this is that in this way we will be able to extract the main Hohfeldian framework for each directive, and discard peripheral Hohfeldian constructs with a minimum of risk. We considered as important only the terms that are explicitly defined in a directive (see Figure 2) , and terms that are used in these definitions.Our linguistic analysis identifies various deontic modalities, which are annotated using a linguistic typology of deontic structures containing standard linguistic descriptors for deontic modality 2 , in our case the GOLD ontology 3 (Farrar and Langendoen, 2003) . This results in the following subtypes:• PermissiveModality 4• ObligativeModality 5In this stage we map our linguistic deontic structures onto the hohfeldian concept Duty. Language analysis through the use of patterns, the Stanford parser (Klein and Manning, 2003 ) (syntactic dependency) and the lexical resource VerbNet (Kipper et al., 2006 ) (semantic information for verbs and their arguments) provides input for heuristics for the identification of relevant role bearers within Hohfeldian constructs. These heuristics are all based on annotation types that have been automatically added to the text. For instance, the syntactic subject of an ObligativeModality structure, within which the main verb requires the thematic role Agent (according to VerbNet), is annotated as both the Agent and the Hohfeldian DutyBearer. In this way we try to identify relations within Hohfeldian Duty constructs which involve any defined term or term definition elements.Working with and evaluating the system output involved a close reading exercise by a legal expert using the GATE graphical user interface as illustrated in figure 3. Close reading of the directives and deleting, adding and changing annotations of the texts yielded the annotations needed for the computation of the initial system performance.One the expert judges the annotations to be correct and complete, the data need to be made available in a semantically explicit data structure. In order to make the results available on the Semantic Web, and thus embed the analysis results in a potentially much wider semantic context, the correct annotation structure is mapped onto the existing Hohfeld ontology (Francesconi, 2015) . The Hohfeldian annotation instances populate the ontology by means of an RDF serialisation, which makes all Hohfeldian constructs available as they are explicitly stated in the text. Implicit Hohfeldian constructs can then be derived from the ontological structure through the opposition and correlation relations, as discussed in section 2..
2
Our proposed approach consists of two stages: knowledge retrieval and knowledge integration. In the knowledge retrieval stage, we retrieve relevant knowledge from external event knowledge bases. In the knowledge integration stage, we integrate the retrieved knowledge into our model to infer final answers. We will first give a brief introduction to ASER knowledge base and show the details in the following sections. Figure 3 : Knowledge Retrieval process. During the lemmatization process, the word "received" will be converted to "receive".ASER (Activities, States, Events and their Relations) (Zhang et al., 2020) is an event knowledge base containing 15 event relation, 194 million unique events and 64 million edges among events. The events and relations are extracted from 11-billion-token unstructured text such as Wikipedia, Gigawords, Book-Corppus, etc. The 15 relations reflect the diverse relations like temporal or causal between events. ASER provides abundant knowledge to understand events and is a good treasure to help resolve the script learning problem.In the knowledge retrieval stage, we aim to select the relevant knowledge about the given event in the script. It can be divided into two parts: locating the event optim event in ASER and select the triples in ASER related to optim event. The first part is similar to entity linking in knowledge graphs. As we can know from the event definition, an event consists of four components, the verb, the subjective, the objective and the prepositional object. It is hard to locate the exact optim event in the knowledge bases. In order to relieve this problem, we propose the first-retrieve-then-rerank method. We first retrieve relevant events according to information retrieval methods. And then we rerank the results according to the component of events and obtain optim event. The process is shown in Figure 3 . There are 194 million events in ASER and it is difficult to obtain optim event. In order to relive the burden on computation cost, we utilize Elastic Search to construct index for all the events. Elastic Search constructs inverted index for all the events in ASER. When we input one event after lemmatization, the Elastic Search engine will return top K events related to the given event, ranked by the TFIDF or BM25 scores. The search results of the event "He received an urgent call" is shown in Figure 3 .Elastic Search engine utilizes TFIDF and BM25 to retrieve relevant events. However, it regards each component in the event with the same weight. As we know, the verb in an event is more important and expresses more semantics than other components. According to this observation, we design a rule-based score function to calculate the similarities between the given event and the retrieved event. If the verb is the same, we will add 4 scores. If other components are the same, we will add 2 scores. Each event in the event list will be assigned a score and the event with the highest score will be optim event according to the given event in the script. In Figure 3 , we can select the event "he do receive a call" as optim event in ASER. Finally, we select the triples in ASER related to optim event as our external event knowledge.It is worth noting that if the given event does not match any event in ASER, Elastic Search engine will also return an event list. We set a score threshold when selecting the most related event. If the scores are all lower than the threshold. We will retrieve no knowledge for the given event. 
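The rule-based re-ranking step can be sketched directly from the scoring rule above (+4 for a matching verb, +2 for each other matching component, with a score threshold below which no knowledge is retrieved). The event-tuple representation and the example events are hypothetical.

```python
# Sketch of re-ranking Elasticsearch candidates against the query event.
def rerank_score(query, candidate):
    """query, candidate: dicts with keys 'verb', 'subj', 'obj', 'prep_obj'."""
    score = 0
    if query["verb"] == candidate["verb"]:
        score += 4                                     # verb match is weighted higher
    for slot in ("subj", "obj", "prep_obj"):
        if query[slot] and query[slot] == candidate[slot]:
            score += 2
    return score

def select_event(query, candidates, threshold=4):
    """Return the best-scoring candidate, or None if all fall below the threshold."""
    scored = [(rerank_score(query, c), c) for c in candidates]
    best_score, best = max(scored, key=lambda x: x[0])
    return best if best_score >= threshold else None

query = {"verb": "receive", "subj": "he", "obj": "call", "prep_obj": None}
cands = [{"verb": "receive", "subj": "he", "obj": "call", "prep_obj": None},
         {"verb": "make", "subj": "he", "obj": "call", "prep_obj": None}]
print(select_event(query, cands))   # picks the first candidate (score 8 vs 4)
```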
In our experiments, the threshold is set to 4.In this section, we propose three methods to integrate retrieved external event knowledge into our model and compare the effectiveness of different methods.In our experiments, we set K=10.Knowledge RepresentationKnowledge Context Representation Attention <s> 1 ## 2 ## … ## </s> … … 1 score 2 3concat Figure 4 : The knowledge integration methods. e i is the i-th event in the script and e c is the candidate event. Each event is converted into natural language as "e s , e v , e o , e p " format. k i is the i-th triple extracted from ASER and the total number is T . The score is the probability that the model selects the candidate answer. Each canidate event will have a score and the one with the highest score will be selected as the predicted answer.In recent years, pre-trained models like BERT (Devlin et al., 2019) , RoBERTa have achieved great improvements over a variety of downstream tasks such as question answering, machine reading comprehension, sentiment classification, etc. In this paper, we transfer the pre-trained RoBERTabase model to model the event sequences. As shown in Figure 4 , we put the events in the script and the candidate event into a sequence and utilize "##" as the separator. We also two special tokens <s> and "</s>" which denotes the start and end of the input. The representation of <s> is also known as the <cls> representation. In RoBERTa model, the <cls> is the representation of the whole input and it represents the coherence between the script and the candidate event. We utilize <cls> to interact with external event knowledge to help predict the right answer. We denote the <cls> representation for the input as cls sequence . For simplicity, we denote the knowledge triple as < h, r, t >.We regard retrieved external knowledge as memories and we propose to utilize attention mechanism to integrate external event knowledge into our model. We propose three methods to utilize event knowledge: Tail-Only, Event Templates, Representation Fusion.For the Tail Only method, we only utilize the tail event t in a triple, removing the head event h and relation r. We will first utilize RoBERTa-base to get the representation of the tail event cls(t) and then we adopt attention mechanism to integrate external knowledge:EQUATIONwhere cls sequence is the representation of the concatenation of the script and the candidate event. k is the number of external knowledge triples. a i is the normalization weight over all the triples and k i a i = 1. c is the context representation of external knowledge.For the Event Template method, we adopt the templates in ASER to convert the relation r into natural language format. For example, the relation "Precedence" will be converted into "happen before" and the triple "(He does receive a call, Precedence, He departs away)" will be converted into "He does receive a call happen before he departs away". Then we utilize RoBERTa-base to get the representation of the triple in natural language format and utilize attention mechanism to aggregate external knowledge:EQUATIONwhere cls sequence , a i and c have similar meanings to those in Tail Only method. For the Representation Fusion method, we assign a vector representation for each relation in ASER. Each relation will have the same representation in all the triples containing it. The head event and tail event representation is obtained by RoBERTa-base. Next we fuse the representation of head event, tail event to get the triple representation. 
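A small sketch of the attention-based integration (the Tail-Only variant described above) is shown below. It assumes a dot-product form for the attention scores between the sequence representation cls_sequence and the encoded tail events, since the exact scoring function is not reproduced here.

```python
# Sketch of attending over retrieved knowledge triples (Tail-Only variant).
import torch
import torch.nn.functional as F

def integrate_knowledge(cls_sequence, cls_tails):
    """cls_sequence: [B, H] script+candidate encoding; cls_tails: [B, K, H]."""
    scores = torch.bmm(cls_tails, cls_sequence.unsqueeze(-1)).squeeze(-1)   # [B, K]
    a = F.softmax(scores, dim=-1)                       # normalised weights a_i
    c = torch.bmm(a.unsqueeze(1), cls_tails).squeeze(1)                     # [B, H]
    return c                                            # knowledge context vector
```

The candidate score is then produced by a linear layer over the concatenation of cls_sequence and c, and the candidate event with the highest score is selected as the predicted next event.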
For each triple < h, r, t >:EQUATIONwhere relation embedding ∈ R p * hidden , p is the category of relations, hidden is the hidden size. U ∈ R p denotes the category of relation, with only one position is 1 and others are 0. We obtain the embedding of r according to its index in the matrix.[;] denotes the concatenation operation. Next, we aggregate the external triple representations to get the context representation:EQUATIONwhere cls sequence , a i and c have similar meanings to those in Tail Only method.In the three methods in Section 4.3, we all obtain the context representation c of external knowledge sources. And then we fuse c and the sequence representation cls sequence to get the final score for each candidate choice.EQUATIONwhere m is the number of candidate events and the linear function converts the representation from R 2 * hidden to R 1 , which stands the assigned score to the candidate event. We will perform experiments to verify the effectiveness of three methods. Finally, we select the candidate event with the highest score as the predicted next event.EQUATIONOur goal is to minimize the cross-entropy loss between the right answers and the predicted answers given an event chain and a set of choice events. We define the loss function as follows:EQUATIONwhere N is the number of training instances, m is the number of choices in each instance. f y j = P (e c j |e 1 , e 2 , • • • , e n ) and the i-th choice is the right answer.
2
In this section, we describe the proposed Joint Contrastive Learning (JointCL) framework for zeroshot stance detection in detail. As demonstrated in Figure 1 , the architecture of the JointCL framework contains four main components: 1) stance contrastive learning, which performs contrastive learning based on the supervised signal of stance labels for better generalization of stance features; 2) prototypes generation, which derives the prototypes of the training data by a clustering method; 3) target-aware prototypical graph contrastive learning, which performs the edge-oriented graph contrastive learning strategy based on the target-aware prototypical graphs for sharing the graph structures between known targets and unseen ones; 4) classifier, which detects the stances of targets based on the hidden vectors and graph representations.Formally, let D s = {(r i s , t i s , y i s )} Ns i=1be the training set for the source targets, where t i s and y i s are the training target and the stance label towards the context r i s respectively. N s is the number of the training instances. Further, letD d = {(r i d , t i d )} N d i=1be the testing set for the targets which are unseen in the training set. Here, t i d is the testing target in the context r i d . The goal of ZSSD is to predict a stance label (e.g. "Pro", "Con", or "Neutral") of each testing instance by training a model on the training set.Given a sequence of words r = {w i } n i=1 and the corresponding target t, where n is the length of the sentence r, we adopt a pre-trained BERT (Devlin et al., 2019) h = BERT([CLS]r[SEP ]t[SEP ]) [CLS] (1)Here, we use the vector of the [CLS] token to represent the input instance. For the training set D s , the hidden representations of the training instances can be represented asH = {h i } Ns i=1 .As previously discussed in Gunel et al. (2021) , good generalization requires capturing the similarity between examples in one class and contrasting them with examples in other classes. To improve the generalization ability of stance learning, we define a stance contrastive loss on the hidden vectors of instances with the supervised stance label information. Given the hidden vectors{h i } N b i=1 in a mini-batch B (here, N bis the size of mini-batch), and an anchor of hidden vector h i , h i , h j ∈ B with the same stance label is considered as a positive pair, i.e. y i = y j , where y i and y j are the stance labels of h i and h j , respectively, while the samples {h k ∈ B, k ̸ = i} are treated as negative representations with respect to the anchor. Then the contrastive loss is computed across all positive pairs, both (h i , h j ) and (h j , h i ) in a mini-batch:EQUATIONwhere1 [i=j] ∈ {0, 1}is an indicator function eval-uating to 1 iff i = j. f (u, v) = sim(u, v) = u ⊤ v/∥u∥∥v∥ denotes the cosine similarity between vectors u and v.In the Prototypical Networks for few-shot learning, Snell et al. (2017) derived the prototype of each class by computing the mean vector of the embedded support points belonging to the class. However, in the ZSSD data, the distribution of targets is usually imbalanced. Therefore, inspired by , we perform k-means clustering on the hidden vectors of the training instances H = {h i } Ns i=1 to generate k clusters as the prototypes C = {c i } k i=1 with respect to the target-based representations of training set. Here, a prototype is defined as a representative embedding for a group of semantically similar instances . 
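A minimal sketch of the stance contrastive loss and the prototype-generation step is given below; the temperature, the number of clusters k, and the exact handling of in-batch negatives are assumptions rather than the authors' settings.

```python
# Sketch of the supervised stance contrastive loss over a mini-batch of [CLS]
# vectors, and the k-means clustering that produces the prototypes (rerun each epoch).
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def stance_contrastive_loss(h, y, tau=0.07):
    """h: [B, d] hidden vectors; y: [B] stance labels."""
    h = F.normalize(h, dim=-1)
    sim = h @ h.t() / tau                                   # cosine similarity / temperature
    mask_self = torch.eye(len(y), dtype=torch.bool)
    pos = (y.unsqueeze(0) == y.unsqueeze(1)) & ~mask_self   # same-stance pairs
    exp_sim = sim.masked_fill(mask_self, float("-inf")).exp()
    log_prob = sim - exp_sim.sum(dim=1, keepdim=True).log() # log p(j | i), excluding i itself
    return -(log_prob[pos]).mean()

def generate_prototypes(H, k=100):
    """k-means over all training [CLS] vectors; cluster centres act as prototypes."""
    km = KMeans(n_clusters=k, n_init=10).fit(H.detach().cpu().numpy())
    return torch.tensor(km.cluster_centers_, dtype=H.dtype)
```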
Clustering is performed at each training epoch to update the prototypes.Once the prototypes are generated, a prototypical graph is constructed to capture the relationships between the prototypes and the known targets. This enables the learning of the representation of a target-based instance by modeling the different weights of edges between its corresponding target and various prototypes, so as to generalize the learned graph information to the unseen targets. Here, the prototypes and the targetbased representations are updated in an alternative manner. For a hidden vector h i of a training instance i, we first treat the prototypes C and the hidden vector h i as nodes of the prototypical graph:X = [c 1 , c 2 , • • • , c k , h i ], and then construct the adjacency matrix G ∈ R (k+1)×(k+1) of the fullyconnected graph, G i,j = G j,i = 1.Next, we feed the nodes X and the corresponding adjacency matrix G into a graph attention network (GAT) (Velickovic et al., 2018) to derive the attention scores α i and the graph representation z i for the target-based instance i:EQUATIONwhere GAT(•) represents GAT operation. a(•) denotes retrieving the attention score matrix from the GAT operation, f (•) denotes retrieving the graph representation for h i .From the target-aware perspective, we further explore a Target-Aware Prototypical Graph Contrastive Learning strategy, aiming at generalizing the graph structures learned from the known targets to the unseen ones. Specifically, for the attention matrices {α i } N b i=1 in each mini-batch B, we devise a novel edge-oriented prototypical graph contrastive loss, making the graph structure of similar target-based representations to be similar. This essentially allows the model to learn the representations of (unseen) targets through the prototypes, thus generalizing the target-aware stance information to the unseen targets.For an anchor instance i with edge weights (i.e., the attention score matrix) α i , we construct a positive pair (α i , α j ) by retrieving the attention score matrix of instance j which is either about the same target or has been assigned to the same prototype, and expresses the same stance as i. We also construct negative pairs,(α i , α k ), α k ∈ B, k ̸ = i.Then, the edge-oriented graph contrastive loss is defined as 2 :EQUATIONwhere p i = p j represents the instances i and j correspond to the same target or belong to the same prototype, and express the same stance. 
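To make the graph step concrete, the following is a simplified single-head, GAT-style sketch of the target-aware prototypical graph, standing in for the multi-head GAT of Velickovic et al. (2018); the k-means call, layer sizes, and variable names are illustrative assumptions rather than the paper's implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.cluster import KMeans

def build_prototypes(H, k):
    # H: (N_s, d) numpy array of training [CLS] vectors -> (k, d) prototypes
    return KMeans(n_clusters=k, n_init=10).fit(H).cluster_centers_

class ProtoGraphAttention(nn.Module):
    def __init__(self, d, d_out):
        super().__init__()
        self.W = nn.Linear(d, d_out, bias=False)
        self.a = nn.Linear(2 * d_out, 1, bias=False)

    def forward(self, X):
        # X: (k+1, d) nodes = [c_1, ..., c_k, h_i] on a fully connected graph
        Wx = self.W(X)                                     # (k+1, d_out)
        n = Wx.size(0)
        pairs = torch.cat([Wx.unsqueeze(1).expand(n, n, -1),
                           Wx.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = F.leaky_relu(self.a(pairs).squeeze(-1), 0.2)   # (k+1, k+1) edge scores
        alpha = F.softmax(e, dim=-1)                       # attention matrix alpha_i
        z = alpha @ Wx                                     # aggregated node features
        return alpha, z[-1]                                # edge weights + graph repr. of h_i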
The calculation of the stance and edge-oriented prototypical graph contrastive losses for each minibatch B is illustrated in Algorithm 1.For each instance i, we first concatenate the hidden vector h i and the graph representation z i to get the output representation v i towards the instance i:EQUATIONThen the output representation v i is fed into a classifier with a softmax function to produce the pre-Algorithm 1: Calculation of the stance and edge-oriented prototypical graph contrastive losses for each mini-batch B.Input:B = {hi, αi} N b i=1 , ℓ s , ℓ g ← 0, 0 Output: Lstance, L graph 1 for i = 1 to N b do 2 hi, αi ← B 3 ℓ s (hi)pos, ℓ s (hi)neg ← 0, 0 4 ℓ g (αi)pos, ℓ g (αi)neg ← 0, 0 5 for j = 1 to N b and j ̸ = i do 6 hj, αj ← B 7 if y i == y j then 8 ℓ s (hi)pos+ = exp(f (hi, hj)/τs) 9 if p i == p j and y i == y j then 10 ℓ g (αi)pos+ = exp(f (αi, αj)/τg) 11 ℓ s (hi)neg+ = exp(f (hi, hj)/τs) 12 ℓ g (αi)neg+ = exp(f (αi, αj)/τg) 13 ▷ Computing stance contrastive loss for each hi 14 ℓ s + = ℓ s (hi)pos/ℓ s (hi)neg 15▷ Computing edge-oriented prototypical graph contrastive loss for each αiEQUATIONwhere d y is the dimensionality of stance labels. W ∈ R dy×dm and b ∈ R dy are trainable parameters. We adopt a cross-entropy loss between predicted distributionŷ i and ground-truth distribution y i of instance i to train the classifier:EQUATIONThe learning objective of our proposed model is to train the model by jointly minimizing the three losses generated by stance detection, stance contrastive learning, and target-aware prototypical graph contrastive learning. The overall loss L is formulated by summing up three losses:L = γcL class + γsLstance + γgL graph + λ||Θ|| 2 (12)where γ c , γ s and γ g are tuned hyper-parameters. Θ denotes all trainable parameters of the model, λ represents the coefficient of L 2 -regularization. 4 Experimental Setup
2
In this section, we describe the proposed method in detail. The main notations used are as follows:D l = {(x i , y i )} n l i=1 and D u = {(x i )} nu i=1denote the labeled and unlabeled sets, respectively. x i and y i are the i-th sample and its true label, respectively, and n l and n u are the numbers of labeled and unlabeled samples, respectively. p ij denotes the prediction probability of the j-th class on the ith sample. Let C be the true class distribution of the samples. The output probability (i.e., confidence) p i associated with the predicted label on sample x i and the predicted (i.e., output) class distributionĈ of the samples are defined as follows:p i =max j (p ij ) C[j] = n data i=1 p ij /n data where ∀j ∈ {1,• • •, n c };n c is the number of classes.The pseudocode for the preliminary stage is summarized in Alg. 1. In the preliminary stage, the prediction confidences P l for the labeled samples in D l and the estimated class distribution C u of the unlabeled set D u are calculated. Using D l , the model is reinitialized-and-retrained T -times using a resampling method such as cross-validation. In low-resource settings, such retraining enables more reliable predictions by averaging the results. Each sample in P l is evaluated when the validation loss is the lowest. Each sample should be validated at least once; the prediction confidences are averaged for each sample. P l (and P u in Alg.2 as well) is sorted in order of size for confidence comparison between two different sample sets, D l and D u , in the main stage; we denoted it as P l ( P u for P u ). When retraining T -times, the output class distributions of the unlabeled set D u are obtained and calibrated (this calibration is defined in Section 3.3). Then, the T calibrated class distributions are averaged, resulting in C u . After this stage, P l and C u are used to calculate the similarities for the two stop criteria, conf-sim and class-sim, respectively.After the preliminary stage, we train all the labeled samples and refer to this stage as the main stage. The combined BUS-stop method applied in the main stage is summarized in Alg. 2. The unlabeled set is predicted at every epoch during training.Conf-sim The first proposed stop criterion confsim S conf represents the similarity of the prediction confidences P u for the unlabeled samples with the reference confidences P l . To calculate the similarity between P u and P l , their dimensions must be the same. We sample P u at regular intervals nu n l such that it is the same size as P l and denoted it as ... P u . We use the Euclidean distance to calculate the similarity, resulting in S conf . Then, the first stop criterion is when S conf has the lowest value, i.e., ... P u is most similar to P l . There is a natural concern that ... P u is likely to produce higher (thus dissimilar) confidences than P l because ... P u is obtained by training all the labeled samples, unlike P l . However, the fact that the confidence for each sample in P l is obtained when the validation error is the lowest can alleviate this concern. Thereby, S conf can be a rough criterion for avoiding under-and overfitting, and can reflect the trend of the loss, based on comparison with the reference confidences.Input: D l , D u , P l , C u Output: Expected best model M best Let Queue[1 • • • n que ] = 0Let B conf = inf, andn pat = 0 Initialize a model, M for epoch ∈ {1, 2, 3, • • • } do Train the M one epoch on D l P u ,Ĉ u ← M (D u ) P u ← sort P u in ascending (or descending) order ... 
P u ← sampling P u at regular intervals nu n l S conf = Euclidian-distance( ... P u , P l ) S class = Cosine-similarity(Ĉ u , C u ) if S conf < B conf then n pat = 0 and Queue[1 • • • n que ] = 0 B conf = S conf else n pat = n pat + 1 end if if n pat < n que then if S class > max(Queue) then M best ← save the current M end if Queue dequeue & ← −−−−− − enqueue S class elseEnd training end if end for return M best Class-sim The second proposed stop criterion is class-sim, S class . The predicted class distribu-tionĈ u on the unlabeled set is compared with the estimated class distribution C u from the preliminary stage. The assumption is that a well-trained model can also predict the class distribution more accurately. Therefore, estimation of the true class distribution is crucial. A calibration method that facilitates better estimation of the class distribution is presented in Section 3.3. We use the cosine similarity to calculate the similarity betweenĈ u and C u , and obtain S class . The second stop criterion is when S class has the highest value, i.e.,Ĉ u is most similar to C u . Thereby, S class can reflect the short-term trend of the accuracy because it is more likely that the outputs of a higher accuracy model are closer to the true class distribution.BUS-stop Finally, we combine the two stopcriteria, conf-sim and class-sim, to form the BUSstop method, as depicted in Alg. product of the two stop criteria can be an ineffective stop criterion because the sizes of S conf and S class are relative. Our combined stop-criterion is to save the model with the highest S class among of the epochs from the lowest S conf to the subsequent (n que −1)-th epoch. This technique enables fine-stopping by considering both S conf and S class , which reflect the long-term and short-term performances, respectively. It is to be noted that early stopping methods should be operated as an ongoing process, and not as a type of post-hoc method. To this end, we use a fixed-size queue Queue, and its size n que as a hyperparameter, as shown in Alg. 2.In this section, we describe the calibration of the predicted class distribution. The calibration method aims to better estimate the true class distribution of the unlabeled set, thereby improving the performance of class-sim, particularly for imbalanced classification.Trained neural networks often involve sampling biases. For example, in binary classification, the prediction results of a model trained with a class ratio a:b tend to follow the distribution of a:b. Thus, when the class distributions are different in the test and training sets, the model performance can deteriorate. Let us suppose the following somewhat ideal and naive situations. Let C u be the true class distribution of the unlabeled set. If the model is perfectly trained with an accuracy of 1.0, the output class distribution will be equal to C u . On the other hand, if the model fails to learn any inference knowledge from training, the model will output the predictions only by its sampling bias; i.e., when the accuracy is the same as the random expectation (denoted as Acc min , e.g., 0.5 in binary classification), the output class distribution will be equal to the sampling bias B. Thus, the model accuracy can reflect whether the output class distribution is closer to the sampling bias or the true distribution. In the preliminary stage, we obtained the models' proxy accuracy and output class distribution as Acc val andĈ u , respectively. 
Assuming that there is an approximate linear relationship, we can define a proportional expression as follows: EQUATION We rearrange the above expression in terms of C u:

C u ≈ B + [(1 − Acc min) / (Acc val − Acc min)] (Ĉ u − B)    (2)

We then take this approximation as the calibrated estimate of C u. Considering the class distribution as a vector, Eq. (2) is a type of extrapolation. B can be defined as the class distribution of D train or as the predicted distribution Ĉ val on the validation set of the preliminary stage. In addition, Acc can be replaced with the F1-score. Fig. 1 illustrates an example of our calibration method.
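A small sketch, under the notation above, of how the two stop criteria and the calibration of Eq. (2) could be computed; the array handling, the renormalization step, and the names are our assumptions rather than the authors' code.

import numpy as np

def conf_sim(P_u, P_l):
    # sort both confidence sets, subsample P_u at regular intervals n_u/n_l so
    # the vectors have equal length, then take the Euclidean distance
    P_u, P_l = np.sort(P_u), np.sort(P_l)
    idx = np.linspace(0, len(P_u) - 1, num=len(P_l)).astype(int)
    return np.linalg.norm(P_u[idx] - P_l)          # lower = more similar

def class_sim(C_hat_u, C_u):
    # cosine similarity between predicted and estimated class distributions
    return float(np.dot(C_hat_u, C_u) /
                 (np.linalg.norm(C_hat_u) * np.linalg.norm(C_u) + 1e-12))

def calibrate_class_distribution(C_hat_u, B, acc_val, acc_min):
    # Eq. (2): C_u ~ B + (1 - acc_min) / (acc_val - acc_min) * (C_hat_u - B)
    scale = (1.0 - acc_min) / max(acc_val - acc_min, 1e-6)
    C_u = np.asarray(B) + scale * (np.asarray(C_hat_u) - np.asarray(B))
    # clip and renormalize (our addition) to keep a valid probability distribution
    C_u = np.clip(C_u, 0.0, None)
    return C_u / C_u.sum()

# BUS-stop keeps the model with the highest class_sim among the epochs in a
# fixed-size window following the epoch with the lowest conf_sim.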
2
As mentioned earlier, existing datasets contain clear labels only for positive sentences. Due to the variability of human choices in composing a summary, unlabeled sentences cannot be simply treated as negative. For our supervised approach to sentence importance detection, a semi-supervised approach is first employed to establish labels.Learning from positive (e.g., important in this paper) and unlabeled samples can be achieved by the methods proposed in (Lee and Liu, 2003; Elkan and Noto, 2008) . Following (Elkan and Noto, 2008) , we use a two-stage approach to train a detector of sentence importance from positive and unlabeled examples. Let y be the importance prediction for a sample, where y = 1 is expected for any positive sample and y = 0 for any negative sample. Let o be the ground-truth labels obtained by the method described in Section 2, where o = 1 means that the sentence is labeled as positive (important) and o = 0 means unlabeled.In the first stage, we build an estimator e, equal to the probability that a sample is predicted as positive given that it is indeed positive, p(o = 1|y = 1). We first train a logistic regression (LR) classier with positive and unlabeled samples, treating the unlabeled samples as negative. Then e can be estimated as Σ x∈P (LR(x)/|P |), where P is the set of all labeled positive samples, and LR(x) is the probability of a sample x being positive, as predicted by the LR classifier. We then calculate p(y = 1|o = 0) using the estimator e, the probability for an unlabeled sample to be positive as: w = LR(x) e / 1−LR(x) 1−e . A large w means an unlabeled sample is likely to be positive, whereas a small w means the sample is likely to be negative.In the second stage, a new dataset is constructed from the original dataset. We first make two copies of every unlabeled sample, assigning the label 1 with weight w to one copy and the label 0 with weight 1 − w to the other. Positive samples remain the same and the weight for each positive sample is 1. We call this dataset the relabeled data.We train a SVM classifier with linear kernel on the relabeled data. This is our final detector of important/unimportant sentences.The classifiers for both stages use dictionaryderived features which indicate the types / properties of a word, along with several general features.MRC The MRC Psycholinguistic Database (Wilson, 1988 ) is a collection of word lists with associated word attributes according to judgements by multiple people. The degree to which a word is associated with an attribute is given as a score within a range. We divide the score range into 230 intervals. The number of intervals was decided empirically on a small development set and was inspired by prior work of feature engineering for real valued scores (Beigman Klebanov et al., 2013) . Each interval corresponds to a feature; the value of the feature is the fraction of words in a sentence whose score belongs to this interval. Six attributes are selected: imagery, concreteness, familiarity, age-of-acquisition, and two meaningfulness attributes. In total, there are 1,380 MRC features.LIWC LIWC is a dictionary that groups words in different categories, such as positive or negative emotions, self-reference etc. and other language dimensions relevant in the analysis of psychological states. Sentences are represented by a histogram of categories, indicating the percentage of words in the sentence associated with each category. 
We employ the LIWC2007 English dictionary, which contains 4,553 words in 64 categories. INQUIRER The General Inquirer (Stone et al., 1962) is another dictionary of 7,444 words, grouped into 182 general semantic categories. For instance, the word absurd is mapped to the tags NEG and VICE. Again, a sentence is represented with the histogram of the categories occurring in it. We also include features that capture general attributes of sentences, including: the total number of tokens, the number of punctuation marks, and whether the sentence contains exclamation marks, question marks, colons, or double quotation marks.
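A compact sketch of the two-stage positive/unlabeled procedure described above (following Elkan and Noto, 2008), assuming precomputed feature vectors X and the label indicator o; the clipping of w and the particular scikit-learn estimators are our assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

def train_pu_detector(X, o):
    # Stage 1: estimate e = p(o=1 | y=1) with an LR classifier that treats
    # unlabeled samples as negative, then weight each unlabeled sample.
    lr = LogisticRegression(max_iter=1000).fit(X, o)
    p = lr.predict_proba(X)[:, 1]
    e = p[o == 1].mean()
    w = (p / e) * ((1 - e) / np.clip(1 - p, 1e-6, None))   # p(y=1 | o=0)
    w = np.clip(w, 0.0, 1.0)

    # Stage 2: relabeled data -- each unlabeled sample appears twice, once as
    # positive with weight w and once as negative with weight 1 - w; positive
    # samples keep weight 1.
    X_u, w_u = X[o == 0], w[o == 0]
    n_pos = int((o == 1).sum())
    X_new = np.vstack([X[o == 1], X_u, X_u])
    y_new = np.concatenate([np.ones(n_pos), np.ones(len(X_u)), np.zeros(len(X_u))])
    s_new = np.concatenate([np.ones(n_pos), w_u, 1 - w_u])
    return LinearSVC().fit(X_new, y_new, sample_weight=s_new)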
2
As a starting corpus, we take CINTIL (Barreto et al., 2006) , a corpus with approximately 1 million tokens, already annotated with manually verified information on part-ofspeech, morphology and named entities, and add labeled syntactic dependency relations by automatically analysing it with the LX-DepParser dependency parser. 1 This tentative annotation is then manually corrected by experts in Linguistics. Evaluation of the automatic parsing against the outcome of the manual annotation, using the Labeled Attachment Score (LAS) metric (Nivre et al., 2007a) , shows that 69.70% of the automatic annotations were assigned correctly, thus greatly reducing the amount of subsequent manual correction work that is needed.Manual correction is supported by WebAnno, 2 an opensource, general-purpose, web-based annotation system (Yimam et al., 2013). It possesses a set of design features that are useful for the annotation we need to carry out: Web-Anno allows creating an annotation project and fully customize it by specifying each annotation layer in terms of its set of valid tags and type-i.e. whether the layer contains a tag per word (e.g. POS), or assigns tags to spans of words (e.g. named entities), or is composed of relations between words (e.g. dependencies). The annotated files are stored in a simple format, which allows us to convert our annotated corpus into that format and import it into WebAnno. After annotation, it is also straightforward to convert the resulting files into a standard format, such as CoNLL. The annotation process itself is supported by a user-friendly and intuitive interface that allows editing a tag by clicking on it and defining a dependency relation between words by dragging an arc between them (Figure 1 shows a sentence as viewed in WebAnno). 3 Being web-based, WebAnno runs directly in a browser, which means that it is not necessary to install any specific software on the machines used by the annotators and that all annotated files are automatically stored in the server. This is coupled with a project management design feature that allows for the administrator of a project to distribute the files to be annotated among the annotators, and a curation feature that automatically finds mismatches between annotators.To ensure a reliable linguistically interpreted data set, manual correction is done by two annotators working under a double-blind scheme, and is followed by a phase of data curation where a third annotator adjudicates any mismatches. All the annotators have graduate or postgraduate education in Linguistics or similar fields and follow specific guidelines elaborated for the task (Branco et al., 2015) . Agreement between annotators, measured by taking the data produced by one of the annotators as reference and comparing it with that of the other annotator, is at 88.60% LAS.
2
We focus on the conditional language modeling problem in open-ended text generation tasks. Formally, given an input context x, models are required to generate a sentence y that is consistent with input context and not contradicts itself.In this work, we propose a two-stage model for the generation process. In the first stage, we extract the starting event sequence r x from the input context and employ the event transition planer to generates subsequent event transition path r y based on r x . In the second stage, the output text is generated from an auto-regressive model conditioning on the path and the preceding context x. Figure 2 gives an overview of our coarse-to-fine framework for open-ended text generation. In a nutshell, we first fine-tune a GPT-2 on event transition sequences as an event planner (i.e., a conditional generative model for event paths). This fine-tuning involves event transition sequences extracted from both commonsense graphs and the training set. We then build a path-aware text generator with an event query layer specifically designed to refer to the planned path when generating the output.In this section, we describe the event transition planner which completes the partial event path given certain input context. Pre-trained language models can be good representation learners of relational knowledge (Petroni et al., 2019; Bosselut et al., 2019) . In our model, we choose GPT-2 (Radford et al., 2019) as the backbone of our event transition planner.Specifically, we first fine-tune GPT-2 with large-scaled event transition paths sampled from ATOMIC . After that, we fine-tune the resulting model in addition on the event transitions extracted from the training corpus, so that the planner is aware of general transitions in the commonsense while focusing on the transitions in the specific domain in the meantime.In preliminary experiments, we find that directly running a full fine-tuning (i.e., updating all GPT-2 parameters) leads to a drop in the final performance. We suspect the reason is the full fine-tuning flushes out the original general knowledge from the largescale pre-training (Chen et al., 2019; Lee et al., 2020; .To overcome this drawback, we prepend a trainable continuous event prompt z to the input path r = [r x ; r y ] of every transformer layer in event transition planner, as prefix-tuning (Li and Liang, 2021) does. A trainable matrix U θ with parameters θ is randomly initialized to embed event prompt z. The aim is to use parameters θ introduced by z to store event transition patterns from ATOMIC. Then the representation of each input event transition path r is prompted as r = [z; r]. To increase training speed and performance robustness, we apply an additional linear reparameterization function on U θ .EQUATIONwhere U θ is another randomly initialized matrix with smaller dimension, FFN is a large feedforward neural network (Vaswani et al., 2017) . 
We perform gradient updates on the following log-likelihood objective:EQUATIONwhere φ denotes the pre-trained parameters from the backbone LM of event transition planner, θ denotes the newly introduced parameters for the event prompt, z idx denotes the index sequence of the event prompt, EP is short for event transition planner, and h <y denotes the hidden states calculated by the trainable event prompt matrix and activation layers of the backbone LM:h y = U θ [y, :], if y ∈ z idx , LM φ (r y | h <y ) otherwise.(3)Similar to the above event prompting technique, for the paths from downstream dataset, we prepend another event prompt z to the r and only optimize the parameters introduced by z . This effectively preserves the newly-learned event transition patterns from ATOMIC and continuously adapts the event transition planner to different downstream event transition patterns.Current state-of-the-art systems for open-ended text generation are based on fine-tuning pretrained language models with different downstream datasets. Although text generation fluency is usually not a crucial issue nowadays, topic-related mistakes (Dou et al., 2021 ) such as off-prompt and self-contradiction are common. We therefore integrate the event transition paths produced by the planner into the text generation model via an event query layer using the multi-head attention mechanism (MHA).The event query layer is built on top of the stacked transformer layers, aiming to explicitly induce the expected output with event transition paths. The input of the event query layer is the event transition path r given the current input x. r not only summarizes the event transition in x, also indicates possible event path following x. The structure of the event query layer resembles the transformer layer. Its output serves as the key and value vectors in the multi-head attention mechanism, which computes another attention vector MHA(r). We concatenate two multi-head attention vectors and derive the final event-path-aware attention vector m:EQUATIONwhere MHA(x) is the output from the multi-head attention function of the original transformer layer, MHA(r) is the output from the event query layer. The event-path-aware attention vector m replaces the original multi-head attention vector MHA(x) and participates the remaining calculation of the language model.The optimization of the event-path-aware text generator is the standard cross-entropy objective: CrossEntropy(y j | y <j , x, r).We base our event planner and event-plan-aware text generator on pre-trained GPT-2-small models 3 . The event prompt length during training ATOMIC event transition paths are set to 5 according to pilot study. We inject and optimize the event query layer on the last layer of the stacked Transformers. When training the event-path-aware text generator, event path r y is derived from the ground truth. During inference, r y is the prediction from event transition planner given the input event transition path r x . More details are elaborated in Appendix B.
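A hedged sketch of the event query layer's fusion step, assuming the planned path r has already been embedded to the model width; the final projection of the concatenated attention vectors and the module names are our assumptions about how m could be derived, not the released implementation.

import torch
import torch.nn as nn

class EventQueryFusion(nn.Module):
    def __init__(self, d_model, n_heads):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.path_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.proj = nn.Linear(2 * d_model, d_model)

    def forward(self, x, r, causal_mask=None):
        # x: (batch, T_x, d) hidden states of the context tokens
        # r: (batch, T_r, d) encoded event transition path (keys and values)
        mha_x, _ = self.self_attn(x, x, x, attn_mask=causal_mask)
        mha_r, _ = self.path_attn(x, r, r)    # queries from x, keys/values from r
        m = self.proj(torch.cat([mha_x, mha_r], dim=-1))
        return m   # event-path-aware attention vector replacing MHA(x) in the top layer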
2
Given a source document, it is processed by our pipeline that: (i) with the help of The Wiki Machine, it identifies, disambiguates and links all terms in the document to the Wikipedia pages; (ii) the terms and their links are used to identify the domain of the document and filter out the terms that are not domainspecific; (iii) the translation of such terms is obtained following the Wikipedia cross-lingual links; (iv) the bilingual domain-specific terms are embedded into the SMT system using different strategies. In the rest of this section, each step is described in detail.Term Detection and Linking The Wiki Machine is a tool for linking terms in text to Wikipedia pages and enriching them with information extracted from Wikipedia and Linked Open Data (LOD) resources such as DBPedia or Freebase. The Wiki Machine has been preferred among other approaches because it achieves the best performance in term disambiguation and linking (Mendes et al., 2011) , and facilitates the extraction of structured information from Wikipedia.The annotation process consists of a three-step pipeline based on statistical and machine learning methods that exclusively uses Wikipedia to train the models. No linguistic processing, such as stemming, morphology analysis, POS tagging, or parsing, is performed. This choice facilitates the portability of the system as the only requirement is the existence of a Wikipedia version with a sufficient coverage for the specific language and domain. The first step identifies and ranks the terms by relevance using a simple statistical approach based on tf-idf weighting, where all the n-grams, for n from 1 to 10, are generated and the idf is directly calculated on Wikipedia pages. The second step links the terms to Wikipedia pages. The linking problem is cast as a supervised word sense disambiguation problem, in which the terms must be disambiguated using Wikipedia to provide the sense inventory and the training data (for each sense, a list of phrases where the term appears) as first introduced in (Mihalcea, 2007) . The application uses an ensemble of word-expert classifiers that are implemented using the kernel-based approach (Giuliano et al., 2009) . Specifically, domain and syntagmatic aspects of sense distinction are modelled by means of a combination of the latent semantic and string kernels (Shawe-Taylor and Cristianini, 2004) . The third step enriches the linked terms using information extracted from Wikipedia and LOD resources. The additional information relative to the pair term/Wikipedia page consists of alternative terms (i.e., orthographical and morphological variants, synonyms, and related terms), images, topic, type, cross language links, etc. For example, in the text "click right mouse key to pop up menu and Gnome panel", The Wiki Machine identifies the terms mouse, key, pop up menu and Gnome panel. For the ambiguous term mouse, the linking algorithm returns the Wikipedia page 'Mouse (computing)', and the other terms used to link that page in Wikipedia with their frequency, i.e., computer mouse, mice, and Mouse.In the context of the experiments reported here, we were specifically interested in the identification of domain-specific bilingual terminology to be embedded into the SMT system. 
For this reason, we extend The Wiki Machine adding the functionality of filtering out terms that do not belong to the document domain, and of automatically retrieving term translations.To identify specific terms, we assign a domain to each linked term in a text, after that we obtain the most frequent domain and filter out the terms that are out of scope. In the example above, the term mouse is accepted because it belongs to the domain computer science, as the majority of terms (mouse, pop up menu and Gnome panel), while the term key in the domain music is rejected.The large number of languages and domains to cover prevents us from using standard text classification techniques to categorize the document. For this reason, we implemented an approach based on the mapping of the Wikipedia categories into the WordNet domains (Bentivogli et al., 2004) . The Wikipedia categories are created and assigned by different human editors, and are therefore less rigorous, coherent and consistent than usual ontologies. In addition, the Wikipedia's category hierarchy forms a cyclic graph (Zesch and Gurevych, 2007) that limits its usability. Instead, the WordNet domains are organized in a hierarchy that contains only 164 items with a degree of granularity that makes them suitable for Natural Language Processing tasks. The approach we are proposing overcomes the Wikipedia category sparsity, allows us reducing the number of domains to few tens instead of some hundred thousands (800,000 categories in the English Wikipedia) and does not require any language-specific training data. Wikipedia categories that contain more pages (∼1,000) have been manually mapped to WordNet domains. The domain for a term is obtained as follows. First, for each term, we extract its set of categories, C, from the Wikipedia page linked to it. Second, by means of a recursive procedure, all possible outgoing paths (usually in a large number) from each category in C are followed in the graph of Wikipedia categories. When one of the mapped categories to a WordNet domain is found, the approach stops and associates the relative WordNet domain to the term. In this way, more and more domains are assigned to a single term. Third, to isolate the most relevant one, these domains are ranked according the number of times they have been found following all the paths. The most frequent domain is assigned to the terms. Although this process needs the human intervention for the manual mapping, it is done once and it is less demanding than annotating large amounts of training documents for text classification, because it does not require the reading of the document for topic identification.Bilingual Term Extraction The last phase consists in finding the translation of the domain terminology. We exploit the Wikipedia cross-language links, which, however, provide an alignment at page level not at term level. To deal with this issue we introduced the following procedure. If the term is equal to the source page title (ignoring case) we return the target page; otherwise, we return the most frequent alternative form of the term in the target language. From the previous example, the system is able to return the Italian page Mouse and all terms used in the Italian Wikipedia to express this concept of Mouse in computer science. Using this information, the term mouse is paired with its translation into Italian.A straightforward approach for adding bilingual terms to the SMT system consists of concatenating the training data and the terms. 
Although it has been shown to perform better than more complex techniques (Bouamor et al., 2012) , it is still affected by major disadvantages that limits its use in real applications. In particular, when small amounts of bilingual terms are concatenated with a large training dataset, terms with ambiguous translations are penalised, because the most frequent and general translations often receive the highest probability, which drives the SMT system to ignore specific translations.In this paper, we focus on two techniques that give more priority to specific translations than generic ones: the Fill-Up model and the XML markup approach. The Fill-Up model has been developed to address a common scenario where a large generic background model exists, and only a small quantity of in-domain data can be used to build an in-domain model. Its goal is to leverage the large coverage of the background model, while preserving the domain-specific knowledge coming from the in-domain data. Given the generic and the in-domain phrase tables, they are merged. For those phrase pairs that appear in both tables, only one instance is reported in the Fill-Up model with the largest probabilities according to the tables. To keep track of a phrase pair's provenance, a binary feature that penalises if the phrase pair comes from the background table is added. The same strategy is used for reordering tables. In our experiments, we use the bilingual terms identified from the source data as in-domain data. Word alignments are computed on the concatenation of the data. Phrase extraction and scoring are carried out separately on each corpus. The XML markup approach makes it possible to directly pass external knowledge to the decoder, specifying translations for particular spans of the source sentence. In our scenario, the source term is used to identify a span in the source sentence, while the target term is directly passed to the decoder. With the setting exclusive, the decoder uses only the specified translations ignoring other possible translations in the translation model.
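A sketch of the domain-assignment step described earlier, assuming two hypothetical lookup tables for the Wikipedia category graph and the manual category-to-WordNet-domain mapping; the breadth-first traversal and depth cap are simplifications of the recursive path-following procedure.

from collections import Counter, deque

def assign_domain(term_categories, category_parents, category_to_domain, max_depth=10):
    # term_categories: categories of the Wikipedia page linked to a term
    # category_parents[c]: outgoing categories of c in the (cyclic) category graph
    # category_to_domain[c]: WordNet domain for the ~1,000 manually mapped categories
    domain_hits = Counter()
    for start in term_categories:
        queue, seen = deque([(start, 0)]), {start}
        while queue:
            cat, depth = queue.popleft()
            if cat in category_to_domain:
                domain_hits[category_to_domain[cat]] += 1
                continue                       # stop this path at a mapped category
            if depth < max_depth:
                for parent in category_parents.get(cat, []):
                    if parent not in seen:     # guard against cycles
                        seen.add(parent)
                        queue.append((parent, depth + 1))
    return domain_hits.most_common(1)[0][0] if domain_hits else None

# Terms whose assigned domain differs from the document's most frequent domain
# are filtered out before bilingual term extraction.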
2
In this section, we present the technical details of UNIST, our unified framework for semantic typing. We first provide a general definition of a semantic typing problem ( §2.1), followed by a detailed description of our model ( §2.2), training objective( §2.3), and inference ( §2.4).Given an input sentence s and a set of one or more token spans of interest E = {e 1 , ...e n }, e i ⊂ s, the goal of semantic typing is to assign a set of one or more labels Y = {y 1 , ...y k }, Y ⊂ Y to E that best describes the semantic category E belongs to in the context of s. Y denotes the set of candidate labels, which may include a large number of free-form phrases (Choi et al., 2018) or ontological labels (Zhang et al., 2017) . In this paper, we consider two categories of semantic typing tasks, lexical typing of a single token span (e.g., entity or event typing), and relational typing between two token spans (relation classification).Overview. As illustrated in Fig. 1 , UNIST leverages a pre-trained language model (PLM) to project both input sentences and the candidate labels into a shared semantic embedding space, where the semantic relatedness between the input and label is reflected by their embedding similarity. This is accomplished by optimizing a margin ranking objective that pushes negative labels away from the input sentence while pulling the positive labels towards the input. This simple, unified paradigm allows our model to rank candidate labels based on the affinity of semantic representations with regard to the input during inference. Meanwhile, our model is not limited to a pre-defined label set, as any textual label, whether seen or unseen during training, can be ranked accordingly as long as the model captures its semantic representation. In order to specify the task at hand along with the tokens (or spans) we aim to classify, we add a task description to the end of the input sentence. This allows our framework to use unified representations from a single encoder for both inputs and labels, as well as support the inference of distinct semantic typing tasks without introducing task-specific model components.Task Description. To highlight the tokens (or spans) we aim to type, we first enclose them with special marker tokens indicating their roles (entities, subjects, objects, or triggers). Next, we leverage the existing semantic knowledge in PLMs and add a natural language task description to the end of the input sentence to specify the task at hand along with tokens (or spans) of interest. The general format for lexical semantic typing isDescribe the type of <tokens>.and that of relational semantic typing isDescribe the relationship between <subject> and <object>.Examples of different input formats (including special tokens and task descriptions) can be found in Tab. 1. In addition, relational typing (relation classification) tasks may incorporate entity types from NER models alongside input sentences. Entity type information has been shown to benefit relation classification Zhong and Chen, 2021; Zhou and Chen, 2021a) , and can be easily incorporated into our task description, as shown in the given example.Input Representation. We use a RoBERTa model to jointly encode the input sentence and the task description. 
Given an input s and its task description d, we concatenate s and d into a single sequence, and obtain the hidden representation of the <s> token as the input sentence representation, denoted by u:u = f encoder ([s, d]).A traditional approach to semantic typing is to train classifiers on top of the representations of specific tokens of interest (Wang et al., 2021a; Yamada et al., 2020) . In the case of relational typing where two entities are involved, their representations are usually concatenated, leading to dimension mismatch with lexical typing tasks and requiring a different task-specific module to handle. Instead, thanks to the introduction of task description, UNIST always uses the universal <s> token representation for both inputs and labels, and across different semantic typing tasks.Label Representation. Most semantic typing tasks provide textual labels in natural language from which a language model can directly capture label semantics. Some relation classification datasets such as TACRED use extra identifiers per: and org: to distinguish same relation type with different subject types. For example, per:parent refers to the parent of a person, while org:parent represents the parent of an organization such as a company. In this case, we simply replace per: and org: with person and organization respectively. The label text is encoded by the exact same model used to encode the input sentence. Given the label y, we again take the <s> token representation as the label representation, denoted by v: v = f encoder (y).Let Y be the set of all candidates labels for a semantic typing task. Given an input [s, d] and the positive label set Y ⊂ Y, we first randomly sample a negative label y ′ ∈ Y\Y for each training instance. Then, we encode the input [s, d] , positive label y and negative label y ′ into their respective semantic representations u, v, and v ′ . UNIST optimizes a margin ranking loss such that positive labels, which are more semantically related to the input than negative labels, are also closer to the input in the embedding space. Specifically, the loss function for a single training instance is defined as:L s,y,y ′ = max{c(u, v ′ ) − c(u, v) + γ, 0},where c(•) denotes cosine similarity and γ is a nonnegative constant. The overall (single-task) training objective is given by:L t = 1 N t s∈St y∈Ys L s,y,y ′ ,where S t is the set of training instances for task t, Y s is the set of all positive labels of s, and N t is the number of distinct pairs of training sentence and positive label. In addition to the single-task setting which optimizes an individual task-specific loss L t , we also consider a multi-task setting of UNIST where it is jointly trained on different semantic typing tasks and optimizes the following objective:L = 1 N t∈T s∈St y∈Ys L s,y,y ′ .where T is the set of semantic typing tasks UNIST is trained on, and N is the total number of training instances.UNIST supports different strategies for inference depending on the task requirement. If the number of labels for each input is fixed, we simply retrieve the top-k closest candidate labels to the input as the final predictions. Otherwise, all candidate labels with similarity above a certain threshold are given as predictions. Note that UNIST is not restricted to a pre-defined label set, as any textual label in natural language can be encoded by UNIST into its semantic representation and ranked accordingly during inference.
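A minimal sketch of the margin ranking objective, assuming u, v_pos, and v_neg are the <s> representations of the input (with task description), a positive label, and a sampled negative label; the default margin value is arbitrary.

import torch
import torch.nn.functional as F

def unist_margin_loss(u, v_pos, v_neg, gamma=0.1):
    # pull positive labels toward the input, push the sampled negative away
    pos = F.cosine_similarity(u, v_pos, dim=-1)
    neg = F.cosine_similarity(u, v_neg, dim=-1)
    return torch.clamp(neg - pos + gamma, min=0.0).mean()

# Inference: encode every candidate label once, rank labels by cosine similarity
# to u, and keep the top-k labels (or all labels above a threshold).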
2
In this section, we describe our model architecture in Section 3.1 and our proposed temporal representations in Section 3.2.Our model is depicted in Figure 1 and consists of 3 components: (1) utterance encoder, (2) context encoder, and (3) output network. We next describe each component in detail.Utterance Encoder We use a bi-directional LSTM (Hochreiter and Schmidhuber, 1997) and pre-trained word embeddings to encode the current utterance into an utterance embedding. For pre-trained word embeddings, we use FastText (Bojanowski et al., 2017) concatenated with Elmo (Peters et al., 2018) trained on an internal SLU dataset.Context Encoder Context encoder is a hierarchical model that consists of a turn encoder and a sequence encoder. For each previous turn, turn encoder encodes 3 types of features: (1) utterance text, (2) hypothesized domain, and (3) hypothesized domain-specific intent, which are also used in Naik et al. (2018) . Utterance text is encoded using the same model architecture as in utterance encoder. Hypothesized domain and intent are first represented using one-hot encoding then projected into embeddings. We stack the 3 representations, perform max-pooling then feed into a 2 layer fully connected neural network to produce a turn representation. Temporal representations (Section 3.2) are then applied to indicate their closeness. Finally, sequence encoder encodes the sequence of temporal encoded turn representations into a single context embedding that is fed to the output network.Output Network Output network concatenates utterance embedding and context embedding as input and feeds into a 2 layer fully-connected network to produce classification logits.Response Time Considerations State-of-theart contextual models encode the entire context and utterance to learn coarse and fine relationships with attention mechanisms Heck et al., 2020) . Since commercial voice assistants need to provide immediate responses to users, encoding context and utterance is computationally expensive such that the system would not respond in-time at industrial-scale (Kleppmann, 2017) . We separate context encoder from utterance encoder so that we can encode context when user is idle or when the voice assistant is responding. Moreover, the hierarchical design allows us to cache previously encoded turn representations to avoid re-computation.In this section, we present the temporal representations used in our experiments. For the following, given previous turn t and its turn features h t (c) from turn encoder, we denote its wall-clock second difference and turn order offset asd ∆sec , d ∆turn .For operators, we denote and ⊕ as element-wise multiplication and summation.Time Mask (TM) (Conway and Mathias, 2019) feeds d ∆sec into a 2 layer network and sigmoid function to produce a masking vector m ∆sec that is multiplied with the context feature h T c , and show that important features occur in certain time spans. The equations are given as follows.EQUATIONEQUATIONEQUATIONHere W s1 , W s2 , b s1 , b s2 are weight matrices and bias vectors, φ and σ are ReLU activation and sigmoid functions, and h t T M (c) denotes the time masked features. We also considered binning second differences instead of working with d ∆sec . However, we find that binning significantly underperforms compared to the latter.Turn Embedding (TE) We first represent d ∆turn as a one-hot encoding then project it into a fixed-size embedding e ∆turn . 
We then sum the turn embedding with the context features, as in the positional encoding of the Transformer (Vaswani et al., 2017). EQUATION It is natural and intuitive to assume that closer context is more likely to correlate with the current user request. Suppose we are given the user requests "Where is Cambridge?" and "How is the weather there?". It is more likely that the user is inquiring about the weather in Cambridge if the second request immediately follows the first than if the two requests are hours or multiple turns apart. For a proper comprehension of closeness, both wall-clock and turn-order information are needed: requests with the same wall-clock difference can only be interpreted in light of their turn-order difference, and vice versa. Here we propose three representations that combine the two kinds of information based on different hypotheses. Turn Embedding over Time Mask (TEoTM) provides turn-order information on top of seconds. We do so by first masking the context features using Time Mask and then marking the relative order with Turn Embedding. This variant assumes that past context is important even when it is distant in seconds.

4 Results

In this section, we first describe our experimental setup in Section 4.1, present our main results in Section 4.2, and then present our analyses in Section 4.3.
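An illustrative PyTorch sketch of Time Mask and Turn Embedding, with hidden sizes and the maximum turn offset chosen arbitrarily; TEoTM composes the two as shown in the final comment.

import torch
import torch.nn as nn

class TimeMask(nn.Module):
    def __init__(self, d_ctx, d_hid=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, d_hid), nn.ReLU(),
                                 nn.Linear(d_hid, d_ctx), nn.Sigmoid())

    def forward(self, h_turn, d_sec):
        # h_turn: (batch, d_ctx) turn features; d_sec: (batch, 1) second deltas
        m = self.net(d_sec)            # masking vector in (0, 1)
        return h_turn * m              # element-wise time masking

class TurnEmbedding(nn.Module):
    def __init__(self, d_ctx, max_turns=10):
        super().__init__()
        self.emb = nn.Embedding(max_turns, d_ctx)

    def forward(self, h_turn, d_turn):
        # d_turn: (batch,) turn-order offsets, added like a positional encoding
        return h_turn + self.emb(d_turn)

# TEoTM: turn_embedding(time_mask(h_turn, d_sec), d_turn)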
2
In this section, we give a detailed description of our self-collaborative denoising learning framework, which consists of two interactive teacher-student networks to address both the incomplete and inaccurate annotation issues. As illustrated in Figure 2, each teacher-student network contributes to an inner loop for self denoising and the outer loop between two networks is a collaborative denoising scheme. These two procedures can be optimized in a mutually-beneficial manner, thus improving the performance of the NER system.It is widely known that deep neural networks have high capacity for memorization (Arpit et al., 2017) . When noisy labels become prominent, deep neural NER models inevitably overfit noisy labeled data, resulting in poor performance. The purpose of self denoising learning is to select reliable labels to reduce the negative influence of noisy annotations. To achieve this end, self denoising learning involves a teacher-student network, where the teacher first generates pseudo labels to participate in labeled token selection, then the student is optimized via back-propagation based on selected tokens, and finally the teacher is updated by gradually shifting the weights of the student in continuous training with exponential moving average (EMA). We take two neural NER models with the same architecture as the teacher and student respectively.This subsection illustrates our labeled token selection strategy based on the consistency and high confidence predictions.Consistency Predictions. It has been observed that the model's predictions of wrongly labeled instances fluctuate drastically in previous studies . A mislabeled instance will be supervised by both its wrong label and similar instances. For example, Amazon is wrongly annotated as organization in Figure 1 . The wrong label organization pushes the model to fit this supervision signal while other clean tokens with similar context will encourage the model to predict it as location. Therefore, we can take advantage of this property to separate clean tokens from noisy ones.Based on above analysis, how to quantify the fluctuation becomes a key issue. One straightforward solution is to integrate predictions from different training iterations but with more time-space complexity. Thanks to the widespread concern of EMA, we use it to update the teacher's parameters. 2The interplay between two teacher-student networks is an outer loop (i.e., collaborative denoising): the pseudo labels are applied to update the noisy labels of the peer network periodically.In this way, the teacher can be viewed as the temporal ensembling of the student models in different training steps and then its prediction will be the ensemble of predictions from past iterations. Therefore, the pseudo labels predicted by the teacher can quantify the fluctuation of noisy labels naturally. Subsequently, we devise the first token selection strategy based on the fluctuation of noisy labels to identify the correctly labeled tokens (X i ,Ȳ i ) via the consistency between noisy labels and predicted pseudo labels, denoted as:EQUATIONwhere y j ∈ Y i is the noisy label of the j-th token in the i-th sentence andỹ j is the pseudo label predicted by the teacher θ t .High Confidence Predictions. As studied in previous works (Bengio et al., 2009; Arpit et al., 2017) , hard samples can not be learnt effectively at first, thus predictions of those mislabeled hard samples may not fluctuate and then they are mistakenly believed to be reliable. 
To alleviate this issue, we propose the second selection strategy to pick tokens with high confidence predictions, as formulated in Equation 2, wherep j is the label distribution of the j-th token predicted by the teacher, δ denotes the confidence threshold.(X i ,Ȳ i ) HCP = {(x j , y j ) | max(p j ) ≥ δ} (2)4.1.2 Optimization Loss Function of the Student. Standard supervised NER methods are fitting the outputs of a model to hard labels (i.e, one-hot vectors) to optimize the parameters. However, when the model is trained with tokens and mismatched hard labels, wrong information is being provided to the model. Compared with hard labels, the supervision with soft labels is more robust to the noise because it carries the uncertainty of the predicted results. Therefore, we modify the standard cross entropy loss into a soft label form defined as:EQUATIONT i = (X i ,Ȳ i ) CP ∩ (X i ,Ȳ i ) HCP (4)where p i j,c is the probability of the j-th token with the c-th class in the i-th sentence predicted by the student andp i j,c is from the teacher. T i includes the tokens in the i-th sentence meeting the consistency and high confidence selection strategies simultaneously. I is the indicator function, I i,j = 1 when the j-th token is in T i , otherwise I i,j is 0.Then the parameters of the student model can be updated via back-propagation as follows:EQUATIONUpdate of the Teacher. Different from the optimization of the student model, we apply EMA to gradually update the parameters of the teacher, as shown in Equation 6, where α denotes the smoothing coefficient.EQUATIONAlthough the clean token selection strategies indeed alleviate noisy annotations, they also suffer from unreliable token choice which misguides the model into generating biased predictions. As formulated in Equation 7, the update of the teacher θ i t in i-th iteration can be converted into the form of back-propagation (derivations in Appendix A.1):θ i t = θ i−1 t − γ(1 − α) i−1 j=0 α i−1−j ∂L ∂θ j s (7)where γ is the learning rate and (1 − α) is a small number because α is generally assigned a value close to 1 (e.g., 0.995), equivalent to multiplying a small coefficient on the weighted sum of student's past gradients. Therefore, with the conservative and ensemble property, the application of EMA has largely mitigated the bias. As a result, the teacher tends to generate more reliable pseudo labels, which can be used as new supervision signals in the collaborative denoising phase.Based on the devised clean token selection strategy in self denoising learning, the teacher-student network can utilize the correctly labeled tokens in an ideal situation to alleviate the negative effect of label noise. However, just filtering unreliable labeled tokens will inevitably lose useful information in training set since there is no opportunity for the wrongly labeled tokens to be corrected and explored. Intuitively, if we can change the wrong label to the correct one, it will be transformed into a useful training instance. Inspired by some co-training paradigms (Han et al., 2018; Yu et al., 2019; Wei et al., 2020) , we propose the collaborative denoising learning to update noisy labels mutually for mining more useful information from dataset by deploying two teacherstudent networks with different architecture. As stated in (Bengio, 2014) , a human brain can learn more effectively if guided by the signals produced by other humans. 
Similarly, the pseudo labels predicted by the teacher are applied to update the noisy labels of the peer teacher-student network periodically since two teacher-student networks have different learning abilities based on different initial conditions and network structures. With this outer loop, the noisy labels can be improved continuously and the training set can be fully explored.In this subsection, we introduce the overall procedure of our SCDL framework. Algorithm 1 gives Algorithm 1 Training Procedure of SCDLInput: Training corpus D = {(Xi, Yi)} M i=1with noisy labels Parameter: Two network parameters θt 1 , θs 1 , θt 2 , and θs 2 Output: The best model 1: Pre-training two models θ1, θ2 with D. Pre-Training. 2: θt 1 ← θ1, θs 1 ← θ1, θt 2 ← θ2, θs 2 ← θ2, step ← 0. 3: Initialize noisy labels: YI ← Y, YII ← Y . 4: while not reach max training epochs do 5:Get a batch (X (b) , YEQUATIONII ) from D, step ← step + 1.Self Denoising Learning.Get pseudo-labels via the teacher θt 1 , θt 2 : (b) ; θt 2 ).Y (b) I ← f (X (b) ; θt 1 ), Y (b) II ← f (XGet clean tokens:T (b) I ← TokenSelection(Y (b) I ,Ỹ (b) I ), T (b) II ← TokenSelection(Y (b) II ,Ỹ (b) II ). 8:Update the student θs 1 and θs 2 by Eq. 3 and Eq. 5. 9:Update the teacher θt 1 and θt 2 by Eq. 6. 10:if step mod U pdate_Cycle = 0 then 11:Update noisy labels mutually:Collaborative Denoising Learning. YI = {Yi ← f (Xi; θt 2 )} M i=1 , YII = {Yi ← f (Xi; θt 1 )} M i=1 . 12:end if 13: end while 14: Evaluate models θt 1 , θs 1 , θt 2 , θs 2 on Dev set. 15: return The best model θ ∈ {θt 1 , θs 1 , θt 2 , θs 2 } the pseudocode. To summarize, the training process of SCDL can be divided into three procedures:(1) Pre-Training with Noisy Labels. We warm up two NER models θ 1 and θ 2 on the noisy labels to obtain a better initialization, and then duplicate the parameters θ for both the teacher θ t and the student θ s (i.e., θ t 1 = θ s 1 = θ 1 , θ t 2 = θ s 2 = θ 2 ). The training objective function in this stage is the cross entropy loss with the following form:L(θ) = − 1 MN M i=1 N j=1 y i j log(p(y i j |X i ; θ)) (8)where y i j means the j-th token label of the i-th sentence in the noisy training corpus and p(y i j |X i ; θ) denotes its probability produced by model θ. M and N are the size of training corpus and the length of sentence respectively. (2) Self Denoising Learning. In this stage, we can select correctly labeled tokens to train the two teacher-student networks respectively. (3) Collaborative Denoising Learning. Self denoising can only utilize correct annotations and this phase will update noisy labels mutually to relabel tokens for two teacher-student networks. The initial noisy labels of two networks comes from distant supervision. The second and third phase are conducted alternately, which will promote each
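A minimal sketch of the inner-loop pieces of SCDL (clean-token selection, the soft-label loss of Eq. 3, and the EMA update of Eq. 6); tensor shapes and helper names are assumptions, not the released code.

import torch

def select_clean_tokens(probs_teacher, noisy_labels, delta=0.9):
    # probs_teacher: (batch, seq, n_classes); noisy_labels: (batch, seq)
    pseudo = probs_teacher.argmax(dim=-1)
    confident = probs_teacher.max(dim=-1).values >= delta   # high-confidence predictions
    consistent = pseudo == noisy_labels                     # consistency with noisy labels
    return consistent & confident                           # indicator mask I

def soft_label_loss(probs_student, probs_teacher, mask):
    # cross entropy against the teacher's soft labels, on selected tokens only
    ce = -(probs_teacher * torch.log(probs_student + 1e-12)).sum(dim=-1)
    return (ce * mask).sum() / mask.sum().clamp(min=1)

@torch.no_grad()
def ema_update(teacher, student, alpha=0.995):
    # theta_t <- alpha * theta_t + (1 - alpha) * theta_s
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(alpha).add_(p_s, alpha=1 - alpha)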
2
In this section, we describe task-guided pretraining and selective masking strategy in detail. For convenience, we denote general unsupervised data, in-domain unsupervised data, downstream supervised data as D General , D Domain and D Task . They generally contain about 1000M words, 10M words, and 10K words respectively.As shown in Figure 1 , our overall training framework consists of three stages:General pre-training (GenePT) is identical to the pre-training of BERT (Devlin et al., 2019) . We randomly mask 15% tokens of D General and train the model to reconstruct the original text.Task-guided pre-training (TaskPT) trains the model on the mid-scale D Domain with selective masking to efficiently learn domain-specific and task-specific language patterns. In this stage, we apply a selective masking strategy to focus on masking the important tokens and then train the model to reconstruct the input. The details of selective masking are introduced in Section 2.2.Fine-tuning is to adapt the model to the downstream task. This stage is identical to the finetuning of the conventional PLMs.Since TaskPT enables the model to efficiently learn the domain-specific and task-specific patterns, it is unnecessary to fully train the model in the stage of GenePT. Hence, our overall pre-training time cost of the two pre-training stages can be much smaller than those of conventional PLMs.In our TaskPT, we select important tokens of D Task by their impacts on the classification results. However, this method relies on the supervised labels of D Task . To selectively mask on mid-scale unlabeled in-domain data D Domain , we adopt a neural model to learn the implicit scoring function from the selection results on D Task and use the model to find important tokens of D Domain .We propose a simple method to find important tokens of D Task . Given the n-token input sequence s = (w 1 , w 2 , . . . , w n ), we use an auxiliary sequence buffer s to help evaluating these tokens one by one. At time step 0, s is initialized to empty. Then, we sequentially add each token w i to s and calculate the task-specific score of w i , which is denoted by S(w i ). If the score is lower than a threshold δ, we regard w i as an important token. Note that we will remove previous important tokens from s to make sure the score is not influenced by previous important tokens.Assume the buffer at the time step i − 1 is s i−1 . We define the token w i 's score as the difference of classification confidences between the original input sequence s and the buffer after adding w i , which is denoted by s i−1 w i :EQUATIONwhere y t is the target classification label of the input s and P (y t | * ) is the classification confidence computed by a PLM fine-tuned on the task. Note that the PLM used here is the model with GenePT introduced in Section 2.1, not a fully pre-trained PLM. In experiments, we set δ = 0.05. The important token criterion S(w i ) < δ means after adding w i , the fine-tuned PLM can correctly classify the incomplete sequence buffer with a close confidence to the complete sequence.For D Domain , text classification labels needed for computing P (y t | * ) are unavailable to perform the method stated above.To find and mask important tokens of D Domain , we apply the above method to D Task to generate a small scale of data where important tokens are annotated. Then we fine-tune a PLM on the annotated data to learn the implicit rules for selecting the important tokens of D Domain . The PLM used here is also the model with GenePT. 
The fine-tuning task here is a binary classification of whether a token is important or not. With this fine-tuned PLM as a scoring function, we can efficiently score each token of D Domain without labels and select the important tokens to be masked. After masking the important tokens, D Domain can be used as the training corpus for our task-guided pre-training.
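As an illustration of the scoring procedure, the sketch below walks the sequence buffer token by token; `classify_confidence(text, label)` is an assumed stand-in for the confidence P(y_t | ·) of the GenePT-fine-tuned PLM, and the default threshold follows the δ = 0.05 used in the experiments.

```python
# Hedged sketch of important-token selection on a labeled D_Task sentence.
def select_important_tokens(tokens, label, classify_confidence, delta=0.05):
    full_conf = classify_confidence(" ".join(tokens), label)  # P(y_t | s)
    buffer, important = [], []
    for w in tokens:
        buffer.append(w)
        # S(w_i): confidence gap between the full sentence and the buffer with w_i added.
        score = full_conf - classify_confidence(" ".join(buffer), label)
        if score < delta:
            # Adding w_i (almost) recovers full confidence, so it is important;
            # remove it from the buffer so later scores are not influenced by it.
            important.append(w)
            buffer.pop()
    return important
```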
2
Here we outline the details of our approach used to generate the final submission. The opensource implementation of our training and evaluation pipeline has been released in public domain. 1Typological features were preprocessed to find likely associations between genetic and areal prop erties of the language. For each typological feature and value from a set of its values we com puted the following probability estimates:EQUATIONEQUATIONily" and "genus" in equations 1 and 2 were as given in the data, and "area" in equation 3 com prised all languages within a 2,500 kilometer ra dius around the target language's latitude and lon gitude computed using the Haversine formula (Ro busto, 1957). 2 1 Available at https://github.com/google-research/ google-research/tree/master/constrained_language_ typology.2 A reviewer noted that the hard limit of 2,500 kilometers seemed arbitrary and wondered why we do not weight "neigh In addition we computed a set of implicational universals (Greenberg, 1963) . For each feature value , for feature , we compute the probabil ity of of given , and each , pair from the set of known featurevalue pairs in the dataEQUATIONFor each of the genetic (family and genus), areal (neighborhood) and universal implication types of associations we kept a separate table, where, for all the known features we stored• the feature ,• the total number of samples in the data of the given type with , denoted ( ),• the value with the highest estimated proba bility per the above equations, denoted maj ,• the prior maj corresponding to maj .Examples of the most likely associations computed above are shown in the first three rows of Ta ble 1. For the Niger-Congo language family, the most likely value for the feature Green_and_Blue (observed 9 times) is 3 Black/green/blue, with the corresponding prior 0.667. For the Bantoid language genus, the feature Green_and_Blue was observed twice, both times with the same value 3 Black/green/blue. The areal exam ple corresponds to the neighborhood of Yoruba,boring languages according to closeness and use weighted clustering instead." We agree that in principle more sophisti cated approaches would be nice, but one should bear in mind that the geographic centroids for languages provided in the data are at best crude, and so doing anything more sophis ticated seemed to us to be crude. Also, to do this properly, distance is really not sufficient: one would also need to ac count for the presence of possible barriers to contact, includ ing impassable mountain ranges, seas, and hostile neighbors, elements that would be hard to model. ( , )=(8.0, 4.3) (where denotes latitude), for which 13 Green_and_Blue features were observed, with the most likely value corresponding to 3 Black/green/blue with prior 0.692. In addition for the implicational features, we stored the prior probability of given the con ditional feature from equation 4, and the to tal count for . As an example of a weak im plicational preference consider the example given in the fourth row of Table 1 , which means that if a language has 2 Red/yellow for the fea ture Red_and_Yellow, then there is a slight pref erence ( =0.583, estimated on the basis of 12 examples) for having 3 Black/green/blue as the value for Green_and_Blue. In other words, 2 Red/yellow ⊃ 3 Black/green/blue. There were 54 cases of Green_and_Blue in the training data, for which the estimated a priori probability for 3 Black/green/blue is 0.148. 
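For concreteness, the areal estimate of Equation (3) can be sketched as follows; the language records and their keys are illustrative, while the Haversine distance and the 2,500 kilometer radius follow the description above.

```python
# Hedged sketch of the areal prior: count feature values among languages within a
# fixed radius of the target language's centroid and keep the majority value.
from collections import Counter
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2, r=6371.0):
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return 2 * r * asin(sqrt(a))

def areal_prior(target, languages, feature, radius_km=2500.0):
    """languages: iterable of dicts with 'lat', 'lon' and feature -> value entries."""
    counts = Counter(
        lang[feature]
        for lang in languages
        if feature in lang
        and haversine_km(target["lat"], target["lon"],
                         lang["lat"], lang["lon"]) <= radius_km
    )
    total = sum(counts.values())
    if total == 0:
        return None, 0.0, 0
    v_maj, c = counts.most_common(1)[0]
    return v_maj, c / total, total   # majority value, its prior, sample count c(f)
```

The family and genus estimates of Equations (1) and (2) are the same computation with the neighborhood predicate replaced by an exact match on the language's family or genus.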
Table 2 shows the overall sizes of the association tables for the genetic, areal and implicational types described above for the different partitions of the shared task data.For each typological feature we train a separate fea ture estimator, resulting in 185 estimators overall. When training an individual estimator, we repre sent each language in the training and development set as a sparse feature vector. Likewise, the lan guages whose features need to be predicted at test time are also represented similarly.The makeup of individual language vector for a given typological feature and a language is shown in Table 3 . The vector consists of dense and sparse subvectors; the components shown in the first four rows of the table are mostly dense. The first subvector consists of language's latitude and longitude coordinate, represented as two numeric features exactly the same as given in the shared task data. There is no particular rationale for this choice other than we previously found that for a different task (speech synthesis) choosing al ternative representations for the language location (e.g., distances to all other languages in the train ing set) in the input features did not significantly improve the results (Gutkin and Sproat, 2017) . The next three subvectors representing genus, family and area, are structured similarly using the three components used in association tables de scribed previously: the majority value maj ( ) rep resented as a categorical feature, the prior corre sponding to this value maj ( ) and the feature fre quency ( ), both represented as numeric features. For these three subvectors, the missing values are represented by the threetuple ( ∅ , 10 −6 , 0), where ∅ denotes a global dummy typological feature value. 3 The first four subvectors described above are fol lowed by multiple subvectors representing individ ual universal implications, as shown in the fifth row of Table 3 . Each implicational, describing the dependence of feature on , is represented as a fivetuple whose elements are stored in the associ ations table for implicational universals: The most likely value maj ( ) of corresponding to the highest conditional probability ( | , , ) (inter preted as probability of taking value given that is ), the total count ( , , ) of when is , the prior ( | ) and the total count ( ) for when is . The missing implicational is rep resented as a fivetuple ( ∅ , 10 −6 , 0, 10 −6 , 0). As mentioned above, the implicational portion of the language vector is very sparse because, for a fea ture ∈ the language vector belongs to, one needs to compute all its correlations to other fea tures ∈ , where is the set of all 182 known features. Since the typological database is very sparse, most of the observed correlations between and for a given language are poorly instanti ated.Value v maj family (f ) Categorical Prior p maj family (f ) Numeric Count c family (f ) Numeric Value v maj area (f ) Categorical Prior p maj area (f ) Numeric Area 3 Count c area (f ) Numeric • • • • • • • • • • • • Value v maj (f i ) Categorical Prob. p(v i |f, v, f i ) Numeric Count c(f, v, f i ) Numeric Prior p(v i |f i ) Numeric Implicational i 5 Count c(f i ) Numeric • • • • • • • • • • • •The categorical features in the language vector are represented using a onehot encoding and the numeric features are scaled to zero mean and unit variance. 
For probability components, prior to numeric feature scaling, the probability features are transformed into the log domain. The overall representation results in language vectors with a rather high dimensionality. For example, the language vectors for the Order_of_Subject,_Object_and_Verb feature have a dimension of 4,111. As we shall see below from the shared task details, this representation may already be too specific given the training set, which only contains 1,125 data points. This observation also explains our choice of representing features and implicationals with the attributes associated with their most likely majority values rather than with all the values observed for that feature in the data, as suggested by a reviewer: doing so would dramatically increase the dimension of the input feature space even further and render our approach completely intractable.
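The sketch below assembles one such (pre-encoding) language vector following the layout of Table 3; the association tables, dictionary keys, and missing-value tuples are hypothetical stand-ins, and the categorical slots would still be one-hot encoded and the numeric ones log-transformed and scaled afterwards.

```python
# Illustrative assembly of a language vector for a single typological feature.
MISSING_3 = ("<NONE>", 1e-6, 0)            # (value, prior, count) for a missing association
MISSING_5 = ("<NONE>", 1e-6, 0, 1e-6, 0)   # missing implicational five-tuple

def language_vector(lang, feature, genus_table, family_table, area_table,
                    implication_table, all_features):
    vec = [lang["lat"], lang["lon"]]                     # location subvector
    for table, key in ((genus_table, lang["genus"]),     # genus, family, area subvectors
                       (family_table, lang["family"]),
                       (area_table, lang["name"])):
        vec.extend(table.get((key, feature), MISSING_3))
    for f_i in all_features:                             # one sparse five-tuple per known feature
        v_i = lang.get(f_i)
        vec.extend(implication_table.get((feature, f_i, v_i), MISSING_5))
    return vec
```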
2
Our aim is to determine predictive features for the detection of brands. Rather than employing some supervised learner that requires manually labeled training data, we want to convert these features directly into a classifier without costly labeled data. We conceive this task as a ranking task. The reason for using a ranking is that our features can be translated into a ranking score in a very straightforward manner. For the evaluation, we do not have to determine some empirical threshold separating the category brand from the category type. Instead, the evaluation measures we employ for ranking implicitly assume highly ranked instances as brands and instances ranked at the bottom as types.For the ranking task, we employ the processing pipeline as illustrated in Figure 1 . Most of our features are designed in such a way that they assign a ranking score to each of our food items by counting how often a feature is observed with a food item; that is why we call these features ranking features. The resulting ranking should assign high scores to food brands and low scores to food types. If we want to combine several features into one ranking, we simply average for each food item the different ranking scores of the individual ranking features. This is possible since they have the same range [0; 1]. We obtain such range by normalizing the number of occurrences of a feature with a particular food item by the total number of occurrences of that food item. The combination by averaging is unbiased as it treats all features equally.We also introduce a reset feature which is applied on top of an existing ranking provided by ranking features. A reset feature is a negative feature in the sense that it is usually a reliable cue that a food item is not a brand. If it fires for a particular food item, then its ranking score is reset to 0.Finally, we add bootstrapping features. These features produce an output similar to the ranking features (i.e. another ranking). However, unlike the ranking features, the bootstrapping features produce their output based on a weakly-supervised method which requires some labeled input. Rather than manually providing that input, we derive it from the combined output that is provided by the ranking and reset features. We restrict ourselves to instances with a high-confidence prediction, which translates to the top and bottom end of a ranking. (Since the instances are not manually labeled, of course, not every label assignment will be correct. We hope, however, that by restricting to instances with a high-confidence prediction, we can reduce the amount of errors to a minimum.) The output of a bootstrapping feature is combined with the set of ranking features to a new ranking onto which again a reset feature is applied. Table 4 shows which feature (each will be discussed below) belongs to which of the above feature types (i.e. ranking, reset or bootstrapping features). Most features (i.e. all except WIKI) are extracted from our domain-specific corpus introduced in §2.Since we established that brands tend to be shorter than types ( §3), we add one feature that ranks each food item according to its number of characters.Brands can be considered a special kind of named entities. We apply a part-of-speech tagger to count how often a food item has been tagged as a proper noun. 
We decided against a named-entity recognizer as it usually only recognizes persons, locations and organizations, while part-of-speech taggers employ a general tag for all proper nouns (that may go well beyond the three afore-mentioned common types). We use a statistical tagger, i.e. TreeTagger (Schmid, 1994) , that also employs features below the word level. As many of our food items will be unknown words, a character-level analysis may still be able to make useful predictions.We also count the number of other named entities that co-occur with the target food brand within the same sentence. We are only interested in organizations; an organization co-occurring with a brand is likely to be the company producing that brand (e.g. He loves Kellogg's company frosties brand .) For this feature, we rely on the output of a named-entity recognizer for German (Chrupała and Klakow, 2010) .Once a product has established itself on the market for a substantial amount of time, many companies introduce variants of their brand to further consolidate their market position. The purpose of this diversification is to appeal to customers with special needs. A typical variant of food brands are light products. In many cases, the names of variants consist of the name of the original brand with some prefix or suffix indicating the particular type of variant (e.g. mini babybel or philadelphia light). We manually compiled 11 affixes and check for each food item how often it is accompanied by one of them.Presumably, brands are more likely to be mentioned in the context of commercial transaction events than types. Therefore, we created a list of words that indicate these types of events. The list was created ad hoc. We used external resources, such as FrameNet (Baker et al., 1998) or GermaNet (Hamp and Feldweg, 1997 ) (the German version of WordNet (Miller et al., 1990) ), and made no attempt to tune that list to our domain-specific food corpus. The final list (85 cues in total) comprises: verbs (and deverbal nouns) that convey the event of a commercial transaction (e.g. buy, purchase or sell), persons involved in a commercial transaction (e.g. customer or shop assistant), means of purchase (e.g. money, credit card or bill), places of purchase (e.g. supermarket or shop) and judgment of price (e.g. cheap or expensive).Even though many mentions of brands are similar to those of types, there exist some particular contexts that are mostly observed with brands. If the food item to be classified often occurs as a modifier of another food item, then the target item is likely to be some brand. This is due to the fact that many brands are often mentioned in combination with the food type that they represent, e.g. volvic mineral water, nutella chocolate spread.Instead of appearing as a modifier ( §4.6), a brand may also be embedded in some prepositional phrase that has a similar meaning, e.g. We only buy the chocolate spread [by nutella] P P .We also employ some semi-supervised graph clustering method in order to assign semantic types to food items as introduced in Wiegand et al. (2014) . The underlying data structure is a food graph that is generated automatically from our domain-specific corpus where nodes represent food items and edge weights Table 5 : Proportion of categories in the entire food vocabulary (General) and among brands (Brands).represent the similarity between different items. The weights are computed based on the frequency of co-occurrence within a similarity pattern (e.g. X instead of Y). 
Food items that cluster with each other in such a graph (i.e. food items that often co-occur in a similarity pattern) are most likely to belong to the same class. For the detection of brands, we examine two different types of food categorization. We always use the same clustering method (Wiegand et al., 2014) and the same graph. Depending on the specific type of categorization, we only change the seeds to fit the categories to be induced.The first categorization we consider is the categorization of food items according to the Food Guide Pyramid (U.S. Department of Agriculture, 1992) as examined in Wiegand et al. (2014) . We observed that food brands are not equally distributed throughout the entire range of food items. There is a notable bias of food brands towards beverages (mostly soft drinks and alcoholic drinks), sweets, snack mixes, dairy products and fat. Other categories, e.g. nuts, vegetables or meat, hardly contain brands. 4 The category inventory and the proportion among types and brands are displayed in Table 5 . We use the category information as a negative feature, that is, we re-set the ranking score to 0 if the category of the food item is either MEAT, SPICE, VEGE, STARCH, FRUIT, GRAIN or EGG. In order to obtain a category assignment to our food vocabulary, we re-run the best configuration from Wiegand et al. (2014) including the choice of category seeds. We just extend the graph that formerly only contained food types by nodes representing brands. We use no manually-compiled knowledge regarding food brands. Even though the seed food items are exclusively food types, we hope to be also able to make inferences regarding food brands. This is illustrated in Figure 2 (a): The brand mars can be grouped with food types that are sweets, therefore, we conclude that mars is also some sweet. (Brands can be grouped with food types of their food category, since food brands are often used as if they were types ( §1)). Since sweets are plausible candidates for brands (Table 5) , mars is likely to be some brand.We think that such bias of brands towards certain subcategories is also present in other domains. For example, in the electronic domain laptops will have a much larger variety of brands than network cables. Similarly, in the fashion domain there exist much more shoe brands than sock brands.We also apply graph clustering directly for the separation of brands from types, i.e. we assign some brand and type seeds and then run graph-based clustering (Figure 2(b) ). In order to combine the output of this clustering with that of the previous methods, we interpret the confidence of the output as a ranking score. As we pursue an unsupervised approach, we do not manually label the seeds but rely on the output of a ranker using a combination of above features (Figure 1) . Instances at the top of the ranking are considered brand seeds, while instances at the bottom are considered type seeds. For many information extraction tasks, the usage of collaboratively-edited resources is increasingly becoming popular. One of the largest resources of that type is Wikipedia. For our vocabulary of food items, we could match 57% of the food brands and 53% of the food types with a Wikipedia article.Even though Wikipedia may hold some useful information for the detection of brands, this information is not readily available in a structured format, such as infoboxes. 
This is illustrated by (3)-(5) which display the first sentence of three Wikipedia articles, where (3) and (4) are food brands and (5) is a food type. There is some thematic overlap across the two categories (e.g. (4) and (5) describe the ingredients of the food item). However, if one also considers the entire articles, some notable topical differences between brands and types become obvious. The articles of food brands typically focus on commercial aspects (i.e. market situation and product history) while articles of food types describe the actual food item (e.g. by distinguishing it from other food items or naming its origin). Therefore, a binary topic classification based on the entire document should be a suitable approach. In the light of the diversified language employed for articles on brands (cp. (3)-(4)), we consider a bag-of-words classifier more effective than applying some textual patterns on those texts.(3) BRAND: Twix is a chocolate bar made by Mars, Inc.(4) BRAND: Smarties is a brand under which Nestlé produces colour-varied sugar-coated chocolate lentils.(5) TYPE: Milk chocolate is a type of chocolate made from cocoa produce (cocoa bean, cocoa butter), sugar, milk or dairy products.Similar to GRAPH brand ( §4.8.2), we harness Wikipedia via a bootstrapping method. We generate a labeled training set of Wikipedia articles representing brands and types using the combined output of the ranking features (+ reset feature). We then train a supervised classifier on these data and classify all articles representing food items of our food vocabulary. We use the output score of the classifier for the article of each food item (which amounts to some confidence score) and thus obtain a ranking score. For those food items for which no Wikipedia entry exists, we produce a score of 0.While GRAPH brand ( §4.8.2) determines similar food items by means of highly weighted edges in a similarity graph (that represent the frequency of co-occurrences with a similarity pattern), we also examine whether distributional similarity can be harnessed for the same purpose. We represent each food item as a vector, where the vector components encode the frequency of words that co-occur with mentions of the food item in a fixed window of 5 words (in our domain-specific corpus). Similar to GRAPH brand ( §4.8.2) and WIKI ( §4.9), we consider the n highest and m lowest ranked food items provided by ranking features (+ reset feature) as labeled brand and type instances for a supervised classifier. For testing, we apply this classifier on each food item in our vocabulary, or more precisely, its vector representation. Thus we obtain another ranking score (again, the output amounts to some confidence score).Feature P@10 P@50 P@100 P@200 AP P@10 P@50 P@100 P@200 AP Table 7 : Performance of food categorization according to the Food Guide Pyramid (auxiliary classification).RANDOM
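To make the pipeline of Figure 1 concrete, the following sketch averages the normalized ranking scores, applies a reset feature, and extracts high-confidence seeds for the bootstrapping features; the function names and seed-set sizes are illustrative and not taken from the original implementation.

```python
# Hedged sketch of combining ranking, reset and bootstrapping features.
def combine_rankings(items, ranking_scores, reset_items):
    """ranking_scores: feature name -> {food item: normalized score in [0, 1]}."""
    combined = {}
    for item in items:
        scores = [ranking_scores[f].get(item, 0.0) for f in ranking_scores]
        combined[item] = sum(scores) / len(scores)   # unbiased average over features
        if item in reset_items:                      # a reset feature fired for this item
            combined[item] = 0.0
    return combined

def bootstrap_seeds(combined, n_top=50, m_bottom=50):
    ranked = sorted(combined, key=combined.get, reverse=True)
    return ranked[:n_top], ranked[-m_bottom:]        # brand seeds, type seeds
```

The seeds returned by `bootstrap_seeds` would then be fed to GRAPH brand, WIKI, or the distributional classifier, and the resulting confidence scores averaged back into a new combined ranking.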
2
We first discuss word embeddings in Section 4.1, before we move on to a formal description of the CRF architecture in Section 4.2.

Word embeddings are continuous vector representations induced from unlabeled input text of arbitrary length. Each dimension of the word embedding represents a latent feature of the word. Intuitively, this kind of meaning representation captures useful properties of the word, both semantically and syntactically. Word embeddings are typically learned using neural networks (Collobert and Weston, 2008) or clustering as the underlying predictive model. Turian et al. (2010) provide a comparison of multiple approaches. A simple and computationally efficient way to learn word embeddings was recently proposed. In the skip-gram model architecture, the hidden layer is replaced by a shared projection layer, and a window of size c of surrounding words w_{t-c}, ..., w_{t-1}, w_{t+1}, ..., w_{t+c} is predicted from the word w_t. The training objective is to learn word embeddings which are good predictors of the surrounding words. This is done by maximizing the average log probability over the data:

$\frac{1}{T}\sum_{t=1}^{T}\ \sum_{-c \le j \le c,\, j \ne 0} \log p(w_{t+j} \mid w_t)$

In order to avoid a costly computation proportional to the size of the vocabulary, p(w_{t+j} | w_t) is computed using the hierarchical softmax function as an approximation of the softmax function. Increasing the window size c can improve accuracy at the expense of training time, since it results in more training examples.

The Conditional Random Fields (CRF) model is a state-of-the-art sequence labeling method first introduced by Lafferty et al. (2001). CRFs are undirected graphical models trained to maximize a conditional probability distribution given a set of features. The most common graphical structure used with CRFs is the linear chain. Let Y = (y_1, ..., y_T) denote a sequence of labels and X = (x_1, ..., x_T) denote the corresponding observation sequence. The sequence of labels is the concept we wish to predict (e.g. target phrases, named entities, POS, etc.). The observations are the words in the input string. Given a linear-chain CRF, the conditional probability p(Y|X) is computed as follows:

$p(Y \mid X) = \frac{1}{Z_X} \prod_{t=1}^{T} \exp\!\left(\sum_{k=1}^{K} \lambda_k f_k(y_t, y_{t-1}, x_t)\right)$

Z_X is a normalizing constant such that all the terms normalize to one, f_k is a feature function, and λ_k is a feature weight. CRFs offer an advantage over generative approaches by relaxing the conditional independence assumption and allowing for arbitrary features in the observation. For all our experiments we use CRFsuite, an implementation of CRFs for labeling sequential data provided by Okazaki (2007). We choose an appropriate learning algorithm based on accuracy on the development set and use Limited-memory BFGS optimization (Nocedal, 1980).
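A small end-to-end illustration of this setup is given below, with gensim and sklearn-crfsuite standing in for the original word2vec and CRFsuite toolchain; the toy sentences, labels, and hyperparameter values are ours and only indicate how embedding dimensions can be exposed to the CRF as real-valued features.

```python
# Hedged sketch: skip-gram embeddings fed into a linear-chain CRF as dense features.
from gensim.models import Word2Vec
import sklearn_crfsuite

sentences = [["the", "battery", "life", "is", "great"],
             ["terrible", "screen", "resolution"]]
w2v = Word2Vec(sentences, vector_size=50, window=5, sg=1, hs=1, min_count=1)

def token_features(sent, i):
    feats = {"word": sent[i]}
    if sent[i] in w2v.wv:
        # one real-valued feature per embedding dimension
        feats.update({f"emb_{d}": float(v) for d, v in enumerate(w2v.wv[sent[i]])})
    return feats

X = [[token_features(s, i) for i in range(len(s))] for s in sentences]
y = [["O", "B-TARGET", "I-TARGET", "O", "O"], ["O", "B-TARGET", "I-TARGET"]]  # toy labels
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, y)
print(crf.predict(X))
```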
2
We propose to treat the task of editing the memory of a neural model as a learning problem. Instead of defining a handcrafted algorithm to compute the new parameters θ , we learn a KNOWL-EDGEEDITOR: a model that predicts θ conditioned on an atomic fact that we want to modify. Concretely, KNOWLEDGEEDITOR is a hypernetwork (Ha et al., 2017 )-i.e., a neural network that predicts the parameters of another network. Since the task requires every other prediction to stay the same-except the one we desire to change-we cast the learning task as a constrained optimization problem.Optimization For an input x, changing the prediction of a model f (•; θ) to a corresponds to minimizing the loss L(θ; x, a) incurred when a is the target. Preserving the rest of the knowledge corresponds to constraining the updated parameter θ such that model outputs f (•; θ ) do not change for x ∈ O x . Our editor g is a neural network parameterized by φ which we choose by optimising the following objective for each data-point x, y, a ∈ D:EQUATIONwhere P x is the set of semantically equivalent inputs to x (for convenience we assume it contains at least x), θ = θ + g(x, y, a; φ), C is a constraint on the update, and the margin m ∈ R >0 is a hyperparameter. The constraint is used to express our desire to preserve model outputs unchanged for x = x. Note that only x, but not the rest of P x , are provided as input to the editor, as these will not be available at test time. In our models, f (x; θ) parameterizes a discrete distribution p Y |X over the output sample space Y, hence we choose to constrain updates in terms of sums of Kullback-Leibler (KL) divergences from the updated model to the original one:EQUATIONThe constraint pushes the updated model to predict output distributions identical to the original one for all x = x. An alternative constraint we could employ is an L p norm over the parameter updates such that g is optimized to make a minimal update to the original model parameter:C Lp (θ, θ , f ; O x ) = ( i |θ i − θ i | p ) 1/p. This constraint was previously used by Zhu et al. (2020) . However, such a constraint, expressed purely in parameter space and without regards to the model architecture f , does not directly encourage model outputs to be close to original ones in function space (i.e., the two functions to be similar). Neural models are highly non-linear functions, so we do not expect this type of constraint to be effective. This will be empirically demonstrated in Section 6.Tractable approximations Non-linear constrained optimization is generally intractable, thus we employ Lagrangian relaxation (Boyd et al., 2004) instead. The constraint itself poses a computational challenge, as it requires assessing KL for all datapoints in the dataset at each training step. For tractability, we evaluate the constraint approximately via Monte Carlo (MC) sampling (see Appendix A for more details). Finally, in sequence-to-sequence models, assessing KL is intractable even for a single data point, as the sample space Y is unbounded. In such cases we approximate the computation on a subset of the sample space obtained via beam search.Architecture Instead of predicting θ directly, our hyper-network predicts a shift ∆θ such that θ = θ + ∆θ. A naive hyper-network implementation might be over-parameterized, as it requires a quadratic number of parameters with respect to the size of the target network. Thus, we apply a trick similar to Krueger et al. 
(2017) to make g tractably predict edits for modern large deep neural networks (e.g., BERT). Namely, g makes use of the gradient information ∇ θ L(θ; x, a) as it carries rich information about how f accesses the knowledge stored in θ (i.e., which parameters to update to increase the model likelihood given a). 5 We first encode x, y, a , concatenating the text with special separator and feeding it to a bidirectional-LSTM (Hochreiter and Schmidhuber, 1997) . Then, we feed the last LSTM hidden states to a FFNN that outputs a single vector h that conditions the further computations. To predict the shift for a weight matrix W n×m ∈ θ, we use five FFNNs conditioned on h that predict vectors α, β ∈ R m , γ, δ ∈ R n and a scalar η ∈ R. ThenEQUATIONwhere σ is the Sigmoid function (i.e., x → (1 + exp(−x)) −1 ), andσ indicates the Softmax function (i.e., x → exp(x)/ i exp(x i )). With this formulation, the parameters for the hyper-network φ scale linearly with the size of θ. An interpretation of Equation 3 is that an update ∆W is a gated sum of a scaled gradient of the objective and a bias term. The scale for the gradient and the bias are generated via an outer vector product as it allows for efficient parameterization of a matrix with just three vectors. The gate lets the model keep some parameters unchanged.Margin annealing The margin m is a hyperparameter and therefore fixed. However, i) it is hard to choose since it is task-dependent, and ii) it should be as small as possible. If the margin is too small, however, we risk having a small feasible set, and the model may never converge. To address both issues, we pick some initial value for the margin and anneal it during training conditioned on validation performance: when the model successfully changes > 90% of the predictions, we multiply the margin by 0.8. We stop decreasing the margin once it reaches a desirable small value. The annealing procedure prevents the model from diverging while increasingly tightening the constraint.
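A schematic rendering of the update in Equation 3 is sketched below in PyTorch; the exact placement of the sigmoid and softmax, and the way the rank-one scale and bias act on the gradient, follow the prose loosely and should be read as an approximation rather than the authors' formulation.

```python
# Approximate sketch: the shift for a weight matrix W (n x m) is a gated sum of a
# scaled gradient and a bias, where scale and bias are rank-one matrices built
# from the predicted vectors alpha, beta (size m), gamma, delta (size n), eta (scalar).
import torch

def predicted_shift(grad_W, alpha, beta, gamma, delta, eta):
    scale = torch.outer(gamma, torch.softmax(alpha, dim=0))   # (n, m) scale for the gradient
    bias = torch.outer(delta, torch.softmax(beta, dim=0))     # (n, m) bias term
    gate = torch.sigmoid(eta)                                 # lets some parameters stay put
    return gate * (scale * grad_W + bias)

n, m = 4, 3
W = torch.randn(n, m, requires_grad=True)
loss = (W.sum() - 1.0) ** 2            # stand-in for the editing loss L(theta; x, a)
loss.backward()
delta_W = predicted_shift(W.grad, torch.randn(m), torch.randn(m),
                          torch.randn(n), torch.randn(n), torch.tensor(0.0))
W_edited = W.detach() + delta_W        # theta' = theta + Delta theta
```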
2
We develop a probabilistic model for lattices based on hypercube embeddings that can model both positive and negative correlations. Before describing this, we first motivate our choice to abandon OE/POE type cone-based models for this purpose. (x) = Q n i p i (x i ) on R n, where F i , the associated CDF for p i , is monotone increasing.Proof. For any product measure we have Zz x p(z)dz = n Y i Z x i z i p i (z i )dz i = n Y i 1 F i (x i )This is just the area of the unique box corresponding toQ n i [F i (x i ), 1] 2 [0, 1] n, under the uniform measure. This box is unique as a monotone increasing univariate CDF is bijective with (0, 1)cones in R n can be invertibly mapped to boxes of equivalent measure inside the unit hypercube [0, 1] n . These boxes have only half their degrees of freedom, as they have the form [F i (x i ), 1] per dimension, (intuitively, they have one end "stuck at infinity" since the cone integrates to infinity.So W.L.O.G. we can consider two transformed cones x and y corresponding to our Bernoulli variables a and b, andletting F i (x i ) = u i and F i (y i ) = v i , their intersection in the unit hyper- cube is Q n i [max(u i , v i ), 1].Pairing terms in the right-hand product, we havep(a, b) p(a)p(b) = n Y i (1 max(u i , v i )) n Y i (1 u i )(1 v i ) 0since the right contains all the terms of the left and can only grow smaller. This argument is easily modified to the case of the nonnegative orthant,mutatis mutandis.An open question for future work is what nonproduct measures this claim also applies to. Note that some non-product measures, such as multivariate Gaussian, can be transformed into product measures easily (whitening) and the above proof would still apply. It seems probable that some measures, nonlinearly entangled across dimensions, could encode negative correlations in cone volumes. However, it is not generally tractable to integrate high-dimensional cones under arbitrary non-product measures.The above proof gives us intuition about the possible form of a better representation. Cones can be mapped into boxes within the unit hypercube while preserving their measure, and the lack of negative correlation seems to come from the fact that they always have an overly-large intersection due to "pinning" the maximum in each dimension to 1. To remedy this, we propose to learn representations in the space of all boxes (axis-aligned hyperrectangles), gaining back an extra degree of freedom. These representations can be learned with a suitable probability measure in R n , the nonnegative orthant R n + , or directly in the unit hypercube with the uniform measure, which we elect.We associate each concept with 2 vectors, the minimum and maximum value of the box at each dimension. Practically for numerical reasons these are stored as a minimum, a positive offset plus an ✏ term to prevent boxes from becoming too small and underflowing.Let us define our box embeddings as a pair of vectors in [0, 1] n , (x m , x M ), representing the maximum and minimum at each coordinate.Then we can define a partial ordering by inclusion of boxes, and a lattice structure as x^y = ? if x and y disjoint, elsex^y = Y i [max(x m,i , y m,i ), min(x M,i , y M,i )] x _ y = Y i [min(x m,i , y m,i ), max(x M,i , y M,i )]where the meet is the intersecting box, or bottom (the empty set) where no intersection exists, and join is the smallest enclosing box. 
This lattice, considered on its own terms as a non-probabilistic object, is strictly more general than the order embedding lattice in any dimension, which is proven in Appendix B. However, the finite sizes of all the lattice elements lead to a natural probabilistic interpretation under the uniform measure. Joint and marginal probabilities are given by the volume of the (intersection) box. For a concept a with associated box (x_m, x_M), the probability is simply $p(a) = \prod_{i=1}^{n} (x_{M,i} - x_{m,i})$ (under the uniform measure). $p(\bot)$ is of course zero, since no probability mass is assigned to the empty set.

It remains to show that this representation can represent both positive and negative correlations. Proof. Boxes can clearly model disjointness (exactly -1 correlation if the total volume of the boxes equals 1). Two identical boxes give their concepts exactly correlation 1. The area of the meet is continuous with respect to translations of intersecting boxes, and all other terms in the correlation stay constant, so by continuity of the correlation function our model can achieve all possible correlations for a pair of variables. This proof can be extended to boxes in R^n with product measures by the previous reduction.

Limitations: Note that this model cannot perfectly describe all possible probability distributions or concepts as embedded objects. For example, the complement of a box is not a box. However, queries about complemented variables can be calculated by the inclusion-exclusion principle, made more efficient by the fact that all non-negated terms can be grouped and calculated exactly. We show some toy exact calculations with negated variables in Appendix A. Also, note that in a knowledge graph true complements are often not required; for example, mortal and immortal are not actually complements, because the concept color is neither. Additionally, requiring the total probability mass covered by boxes to equal 1, or exactly matching marginal box probabilities while modeling all correlations, is a difficult box-packing-type problem and not generally possible. Modeling limitations aside, the union of boxes having mass < 1 can be seen as an open-world assumption on our KB (not all points in space have corresponding concepts, yet).

While inference (calculation of pairwise joint, unary marginal, and pairwise conditional probabilities) is quite straightforward by taking intersections of boxes and computing volumes (and their ratios), learning does not appear easy at first glance. While the (sub)gradient of the joint probability is well defined when boxes intersect, it is non-differentiable otherwise. Instead we optimize a lower bound. Clearly $p(a \vee b) \ge p(a \cup b)$, with equality only when a = b, so this can give us a lower bound:

$p(a \wedge b) = p(a) + p(b) - p(a \cup b) \ge p(a) + p(b) - p(a \vee b)$

where probabilities are always given by the volume of the associated box. This lower bound always exists and is differentiable, even when the joint is not.
It is guaranteed to be nonpositive except when a and b intersect, in which case the true joint likelihood should be used. While a negative bound on a probability is odd, inspecting the bound we see that its gradient will push the enclosing box to be smaller while increasing the areas of the individual boxes, until they intersect, which is a sensible learning strategy. Since we are working with small probabilities, it is advisable to negate this term and maximize the negative logarithm:

$-\log\!\big(p(a \vee b) - p(a) - p(b)\big)$

This still has an unbounded gradient as the lower bound approaches 0, so it is also useful to add a constant inside the logarithm to avoid numerical problems. Since the likelihood of the full data is usually intractable to compute as a conjunction of many negations, we optimize binary conditional and unary marginal terms separately by maximum likelihood. In this work, we parametrize the boxes as (min, Δ = max - min), with Euclidean projections after gradient steps to keep our parameters in the unit hypercube and maintain the minimum/delta constraints. Now that we have the ability to compute probabilities and (surrogate) gradients for arbitrary marginals in the model, and by extension conditionals, we will see specific examples in the experiments.
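The following toy sketch, under the uniform measure on the unit hypercube, computes marginals as box volumes and falls back to the differentiable surrogate p(a) + p(b) - p(a ∨ b) when the meet is empty; it is an illustration in NumPy, not the training code.

```python
# Box-embedding probabilities on [0,1]^n: marginals are volumes, the joint is the
# volume of the intersection box, with the lower-bound surrogate for disjoint boxes.
import numpy as np

def volume(box_min, box_max):
    return float(np.prod(np.clip(box_max - box_min, 0.0, None)))

def joint(a_min, a_max, b_min, b_max):
    meet_min, meet_max = np.maximum(a_min, b_min), np.minimum(a_max, b_max)
    if np.any(meet_max <= meet_min):                       # empty meet: use the lower bound
        join_min, join_max = np.minimum(a_min, b_min), np.maximum(a_max, b_max)
        return volume(a_min, a_max) + volume(b_min, b_max) - volume(join_min, join_max)
    return volume(meet_min, meet_max)

a = (np.array([0.1, 0.2]), np.array([0.5, 0.6]))
b = (np.array([0.3, 0.3]), np.array([0.7, 0.8]))
p_a, p_b, p_ab = volume(*a), volume(*b), joint(*a, *b)
print(p_a, p_b, p_ab, p_ab / p_b)                          # marginals, joint, p(a|b)
```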
2
To study the effect of keywords on readability and comprehensibility of texts on the screen, we conducted an experiment where 62 participants (31 with dyslexia) had to read two texts on a screen, where one of them had the main ideas highlighted using boldface. Readability and comprehensibility were measured via eye-tracking and comprehension tests, respectively. The participants' preferences were gathered via a subjective ratings questionnaire.In the experiment there was one condition, Keywords, with two levels: [+keywords] denotes the condition where main ideas of the text were highlighted in boldface and [−keywords] denotes the condition where the presentation of the text was not modified.The experiments followed a within-subjects design, so every participant contributed to each of the levels of the condition. The order of the conditions was counter-balanced to cancel out sequence effects.When measuring the reading performance of people with dyslexia we need to separate readability 3 from comprehensibility 4 because they are not necessarily related. In the case of dyslexia, texts that might seen not readable for the general population, such as texts with errors, can be better understood by people with dyslexia, and vice versa, people with dyslexia find difficulties with standard texts .To measure readability we consider two dependent variables derived from the eye-tracking data: Reading Time and Fixation Duration. To measure comprehensibility we used a comprehension score as dependent variable.• Fixation Duration. When reading a text, the eye does not move contiguously over the text, but alternates saccades and visual fixations, that is, jumps in short steps and rests on parts of the text. Fixation duration denotes how long the eye rests on a single place of the text. Fixation duration has been shown to be a valid indicator of readability. According to (Rayner and Duffy, 1986; Hyönä and Olson, 1995) , shorter fixations are associated with better readability, while longer fixations can indicate that the processing load is greater. On the other hand, it is not directly proportional to reading time as some people may fixate more often in or near the same piece of text (re-reading). Hence, we used fixation duration average as an objective approximation of readability.• Reading Time. The total time it takes a participant to completely read one text. Shorter reading durations are preferred to longer ones, since faster reading is related to more readable texts (Williams et al., 2003) . Therefore, we use Reading Time, that is, the time it takes a participant to completely read one text, as a measure of readability, in addition to Fixation Duration.• Comprehension Score. To measure text comprehensibility we used inferential items, that is, questions that require a deep understanding of the content of the text. We used multiple-choice questions with three possible choices, one correct, and two wrong. We compute the text comprehension score as the number of correct answers divided by the total number of questions.• Subjective Ratings. In addition, we asked the participants to rate on a five-point Likert scale their personal preferences and perception about how helpful the highlighted keywords were.We had 62 native Spanish speakers, 31 with a confirmed diagnosis of dyslexia. 5 The ages of the participants with dyslexia ranged from 13 to 37, with a mean age of 21.09 years (s = 8.18). 
The ages of the control group ranged from 13 to 40, with a mean age of 23.03 years (s = 7.10).Regarding the group with dyslexia, three of them were also diagnosed with attention deficit disorder. Fifteen people were studying or already finished university degrees, fourteen were attending school or high school, and two had no higher education. All participants were frequent readers and the level of education was similar for the control group.In this section we describe how we designed the texts and keywords that were used as study material, as well as the comprehension and subjective ratings questionnaires.Base Texts. We picked two similar texts from the Spanish corpus Simplext (Bott and Saggion, 2012) . To meet the comparability requirements among the texts belonging to the same experiment, we adapted the texts maintaining as much as possible the original text. We matched the readability of the texts by making sure that the parameters commonly used to compute readability (Drndarevic and Saggion, 2012), had the same or similar values. Both texts: (e) are accessible news, readable for the general public so they contained no rare or technical words, which present an extra difficulty for people with dyslexia (Rello et al., 2013a ).(f) have the same number of proper names (one per text);The Museo Picasso Málaga includes new works of the artist in its permanent collectionThe Andalusian Minister of Culture, Paulino Plata, presented a new reorganization of the permanent collection of the Picasso Museum that, coinciding with the birth anniversary of the painter, incorporates a wide selection of works by Pablo Picasso provided by the Almine and Bernard Ruiz-Picasso Foundation for Art. Paintings, sculptures and ceramics from different periods and styles compose this set of 43 pieces given for 15 years by the mentioned foundation. The incorporation of these creations assumes, according to the Andalusian Council, a valuable contribution to the permanent collection of the Museum Picasso Málaga. In this way, a visitor can now contemplate paintings and sculptures that, for the first time, are exposed in the gallery. (h) one text has two numerical expressions (Rello et al., 2013b) and the other has two foreign words (Cuetos and Valle, 1988) , both being elements of similar difficulty; and (i) have the same number of highlighted keyphrases.An example of a text used (translation from Spanish 6 ) is given in Figure 1 .Keywords. For creating the keywords we highlighted using boldface the words which contained the main semantic meaning (focus) of the sentence. This focus normally corresponds with the direct object and contains the new and most relevant information of the sentence (Sperber and Wilson, 1986) . We only focused on the main sentences; subordinate or relative clauses were dismissed. For the syntactic analysis of the sentences we used Connexor's Machinese Syntax (Connexor Oy, 2006) , a statistical syntactic parser that employes a functional dependency grammar (Tapanainen and Järvinen, 1997) . We took direct objects parsed by Connexor without correcting the output.Comprehension Questionnaires. For each text we manually create three inferential items. The order of the correct answer was counterbalanced and all questions have similar difficulty. An example question is given in Figure 2 .Subjective Questionnaire. The participants rated how much did the keywords helped their reading, their ease to remember the text, and to which extent would they like to find keywords in texts.Text Presentation. 
The presentation of the text has an effect on reading speed of people with dyslexia (Kurniawan and Conroy, 2006; Gregor and Newell, 2000) . Therefore, we used a text layout that follows the recommendations of previous research. As font type, we chose Arial, sans serif, as recommended in (Rello and Baeza-Yates, 2013) . The text was left-justified, as recommended by the British Association of Dyslexia (British Dyslexia Association, 2012). Each line did not exceeded 62 characters/column, the font size was 20 point, and the colors used were black font with creme background, 7 as recommended in .The eye-tracker used was the Tobii T50 that has a 17-inch TFT monitor with a resolution of 1024×768 pixels. It was calibrated for each participant and the light focus was always in the same position. The time measurements of the eyetracker have a precision of 0.02 seconds. The dis-tance between the participant and the eye-tracker was constant (approximately 60 cm. or 24 in.) and controlled by using a fixed chair.The sessions were conducted at Universitat Pompeu Fabra in a quiet room and lasted from 20 to 30 minutes. First, we began with a questionnaire to collect demographic information. Then, we conducted the experiment using eye-tracking. The participants were asked to read the texts in silence and to complete the comprehension tests after each text read. Finally, we carried out the subjective ratings questionnaire.
2
The emphasis of this work is on building a joint S&T model based on two different kinds of data sources, labeled and unlabeled data. In essence, this learning problem can be treated as incorporating certain gainful information, e.g., prior knowledge or label constraints, of unlabeled data into the supervised model. The proposed approach employs a transductive graph-based label propagation method to acquire such gainful information, i.e., label distributions from a similarity graph constructed over labeled and unlabeled data. Then, the derived label distributions are injected as virtual evidences for guiding the learning of CRFs.Algorithm 1 semi-supervised joint S&T induction Input:D l = {(x i , y i )} l i=1 labeled sentences D u = {(x i )} l+u i=l+1 unlabeled sentences Output:Λ: a set of feature weights 1: Begin 2:{G} = construct graph (D l , D u ) 3: {q 0 } = init labelDist ({G}) 4: {q} = propagate label ({G}, {q 0 }) 5: {Λ} = train crf (D l ∪ D u , {q}) 6: EndThe model induction includes the following steps (see Algorithm 1): firstly, given labeled and unlabeled data, i.e.,D l = {(x i , y i )} l i=1 with l labeled sentences and D u = {(x i )} l+u i=l+1with u unlabeled sentences, a specific similarity graph G representing D l and D u is constructed (construct graph). The vertices (Section 4.1) in the constructed graph consist of all trigrams that occur in labeled and unlabeled sentences, and edge weights between vertices are computed using the cosine distance between pointwise mutual information (PMI) statistics. Afterwards, the estimated label distributions q 0 of vertices in the graph G are randomly initialized (init labelDist). Subsequently, the label propagation procedure (propagate label) is conducted for projecting label distributions q from labeled vertices to the entire graph, using the algorithm of Sparse-Inducing Penalties (Das and Smith, 2012) (Section 4.2). The final step (train crf) of the induction is incorporating the inferred trigram-level label distributions q into CRFs model (Section 4.3).In most graph-based label propagation tasks, the final effect depends heavily on the quality of the graph. Graph construction thus plays a central role in graph-based label propagation (Zhu et al., 2003) . For character-based joint S&T, unlike the unstructured learning problem whose vertices are formed directly by labeled and unlabeled instances, the graph construction is non-trivial. Das and Petrov (2011) mentioned that taking individual characters as the vertices would result in various ambiguities, whereas the similarity measurement is still challenging if vertices corresponding to entire sentences.This study follows the intuitions of graph construction from Subramanya et al. (2010) in which vertices are represented by character trigrams occurring in labeled and unlabeled sentences. Formally, given a set of labeled sentences D l , and unlabeled ones D u , where D {D l , D u }, the goal is to form an undirected weighted graph G = (V, E), where V is defined as the set of vertices which covers all trigrams extracted from D l and D u . Here, V = V l ∪ V u , where V l refers to trigrams that occurs at least once in labeled sentences and V u refers to trigrams that occur only in unlabeled sentences. The edges E ∈ V l × V u , connect all the vertices. 
This study makes use of a symmetric k-NN graph (k = 5) and the edge weights are measured by a symmetric similarity function (Equation (3)):w i,j = sim(x i , x j ) if j ∈ K(i) or i ∈ K(j) 0 otherwise(3) where K(i) is the set of the k nearest neighbors of x i (|K(i) = k, ∀i|) and sim(x i , x j ) is a similarity measure between two vertices. The similarity is computed based on the co-occurrence statistics over the features in Table 2 . Most features we adopted are selected from those of (Subramanya et al., 2010) . Note that a novel feature in the last row encodes the classes of surrounding character-s, where four types are defined: number, punctuation, alphabetic letter and other. It is especially helpful for the graph to make connections with trigrams that may not have been seen in labeled data but have similar label information. The pointwise mutual information values between the trigrams and each feature instantiation that they have in common are summed to sparse vectors, and their cosine distances are computed as the similarities.FeatureTrigram + Context x 1 x 2 x 3 x 4 x 5 Trigram x 2 x 3 x 4 Left Context x 1 x 2 Right Context x 4 x 5 Center Word x 3 Trigram -Center Word x 2 x 4 Left Word + Right Context x 2 x 4 x 5 Right Word + Left Contextx 1 x 2 x 3 Type of Trigram: number, punctuation, alphabetic letter and other t(x 2 )t(x 3 )t(x 4 )t "x 1 x 2 x 3 x 4 x 5 ", where the trigram is "x 2 x 3 x 4 ".The nature of the similarity graph enforces that the connected trigrams with high weight appearing in different texts should have similar syntax configurations. Thus, the constructed graph is expected to provide additional information that cannot be expressed directly in a sequence model (Subramanya et al., 2010) . One primary benefit of this property is on enriching vocabulary coverage. In other words, the new features of various trigrams only occurring in unlabeled data can be discovered. As the excerpt in Figure 1 shows, the trigram "天津港" (Tianjin port) has no any label information, as it only occurs in unlabeled data, but fortunately its neighborhoods with similar syntax information, e.g., "上海港" (Shanghai port), "广州 港" (Guangzhou port), can assist to infer the correct tag "M NN".In order to induce trigram-level label distributions from the graph constructed by the previous step, a label propagation algorithm, Sparsity-Inducing Penalties, proposed by Das and Smith (2012) , is employed. This algorithm is used because it captures the property of sparsity that only a few labels Figure 1 : An excerpt from the similarity graph over trigrams on labeled and unlabeled data.are typically associated with a given instance. In fact, the sparsity is also a common phenomenon among character-based CWS and POS tagging. The following convex objective is optimized on the similarity graph in this case:EQUATIONwhere r j denotes empirical label distributions of labeled vertices, and q i denotes unnormalized estimate measures in every vertex. The w ik refers to the similarity between the ith trigram and the kth trigram, and N (i) is a set of neighbors of the ith trigram. µ and λ are two hyperparameters whose values are discussed in Section 5. The squaredloss criterion 1 is used to formulate the objective function. The first term in Equation (4) is the seed match loss which penalizes the estimated label distributions q j , if they go too far away from the empirical labeled distributions r j . 
The second term is the edge smoothness loss that requires q i should be smooth with respect to the graph, such that two vertices connected by an edge with high weight should be assigned similar labels. The final term is a regularizer to incorporate the prior knowledge, e.g., uniform distributions used in (Talukdar et al., 2008; Das and Smith, 2011) . This study applies the squared norm of q to encourage sparsity per vertex. Note that the estimated label distribution q i in Equation (4) is relaxed to be unnormalized, which simplifies the optimization. Thus, the objective function can be optimized by L-BFGS-B (Zhu et al., 1997) , a generic quasi-Newton gradientbased optimizer. The partial derivatives of Equation (4) are computed for each parameter of q and then passed on to the optimizer that updates them such that Equation (4) is maximized.The trigram-level label distributions inferred in the propagation step can be viewed as a kind of valuable "prior knowledge" to regularize the learning on unlabeled data. The final step of the induction is thus to incorporate such prior knowledge into CRFs. Li (2009) generalizes the use of virtual evidence to undirected graphical models and, in particular, to CRFs for incorporating external knowledge. By extending the similar intuition, as illustrated in Figure 2 , we modify the structure of a regular linear-chain CRFs on unlabeled data for smoothing the derived label distributions, where virtual evidences, i.e., q in our case, are donated by {v 1 , v 2 , . . . , v T }, in parallel with the state variables {y 1 , y 2 , . . . , y T }. The modified CRFs model allows us to flexibly define the interaction between estimated state values and virtual evidences by potential functions. Therefore, given labeled and unlabeled data, the learning objective is defined as follows:L(Λ) + l+u i=l+1 E p(y i |x i ,v i ;Λ g ) [log p(y i , v i |x i ; Λ)](5) where the conditional probability in the second term is denoted asEQUATIONThe first term in Equation (5) is the same as Equation (2), which is the traditional CRFs learning objective function on the labeled data. The second term is the expected conditional likelihood of unlabeled data. It is directed to maximize the conditional likelihood of hidden states with the derived label distributions on unlabeled data, i.e., p(y, v|x), where y and v are jointly modeled but the probability is still conditional on x. Here, Z (x; Λ) is the partition function of normalization that is achieved by summing the numerator over both y and v. A virtual evidence feature function of s(y t i , v t i ) with pre-defined weight α is defined to regularize the conditional distributions of states over the derived label distributions. The learning is impacted by the derived label distributions as Equation (7): firstly, if the trigram x t−1 i x t i x t+1 i at current position does have no corresponding derived label distributions (v t i = null), the value of zero is assigned to all state hypotheses so that the posteriors would not affected by the derived information. Secondly, if it does have a derived label distribution, since the virtual evidence in this case is a distribution instead of a specific label, the label probability in the distribution under the current state hypothesis is assigned. 
This means that the values of state variables are constrained to agree with the derived distributions.s(y t i , v t i ) = q x t−1 i x t i x t+1 i (y t i ) if v t i = null 0 else (7)The second term in Equation (5) can be optimized by using the expectation maximization (EM) algorithm in the same fashion as in the generative approach, following (Li, 2009) . One can iteratively optimize the Q function Q(Λ) =y p(y i |x i ; Λ g ) log p(y i , v i |x i ; Λ), in which Λ g is the model estimated from the previous iteration. Here the gradient of the Q function can be measured by:EQUATIONThe forward-backward algorithm is used to measure p(y t−1 i , y t i |x i , v i ; Λ) and p(y t−1 i , y t i |x i ; Λ). Thus, the objective function Equation (5) is optimized as follows: for the instances i = 1, 2, ..., l, the parameters Λ are learned as the supervised manner; for the instances i = l + 1, l + 2, ..., u + l, in the E-step, the expected value of Q function is computed, based on the current model Λ g . In the M-step, the posteriors are fixed and updated Λ that maximizes Equation (5).
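As an illustration of the graph-construction step, the sketch below builds PMI-weighted sparse vectors over trigram context features and connects each trigram to its k = 5 nearest neighbours under cosine similarity; the feature extraction itself is abstracted away and the helper names are ours.

```python
# Hedged sketch of the symmetric k-NN trigram graph with PMI-weighted cosine similarity.
import math
from collections import Counter, defaultdict

def pmi_vectors(trigram_features):
    """trigram_features: dict trigram -> Counter of context-feature occurrences (Table 2)."""
    feat_tot, tri_tot, grand = Counter(), Counter(), 0
    for tri, feats in trigram_features.items():
        for f, c in feats.items():
            feat_tot[f] += c
            tri_tot[tri] += c
            grand += c
    return {
        tri: {f: math.log((c * grand) / (tri_tot[tri] * feat_tot[f]))
              for f, c in feats.items()}
        for tri, feats in trigram_features.items()
    }

def cosine(u, v):
    dot = sum(w * v[f] for f, w in u.items() if f in v)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def knn_graph(vectors, k=5):
    edges = defaultdict(dict)
    for t1 in vectors:
        sims = sorted(((cosine(vectors[t1], vectors[t2]), t2)
                       for t2 in vectors if t2 != t1), reverse=True)[:k]
        for s, t2 in sims:
            edges[t1][t2] = s
            edges[t2][t1] = s          # symmetrize, as in Equation (3)
    return edges
```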
2
We perform a systematic set of experiments for English and Swedish, using different neural constituency parsing architectures in combination with various BERT models to examine how this impacts cross-genre parsing. English is widely used in cross-domain and cross-genre research. Swedish, however, is not as thoroughly examined, yet possess multiple genre treebanks as well as BERT models, making it suitable for our research interests.We use two different neural span-based chart-based parsers, the Berkeley Neural Parser and the SuPar Neural CRF Parser .Berkeley Neural Parser uses a self-encoder and can incorporate BERT models to generate word representations. It uses the last layer embedding of the last subtoken to represent the word. 2 It decouples predicting the optimal representation of a span (i.e. input sequence) from predicting the optimal label, requiring only that the resultant output form a valid tree. This not only removes the underlying grammars found in traditional PCFG parsers, but also direct correlations between a constituent and a label (Fried et al., 2019) . A CKY (Kasami, 1965; Younger, 1967; Cocke and Schwartz, 1970) style inference algorithm is used at test time. Additionally, the parser allows the option of using POS tag prediction to be used as an auxiliary loss task (we use BNP and BNPno to represent with and without the POS loss respectively in our experiments).SuPar Neural CRF Parser (SuPar) is a twostage parser, that, similarly to the Berkeley parser, produces a constituent and then a label. It uses a Scalar mix Tenney et al., 2019a,b) of the last four layers for each subtoken of a word. Additionally, it uses a BiLSTM encoder to compute context aware representations by employing two different MLP layers indicating both left and right word boundaries. Each candidate is scored over the two representations using a biaffine operation (Dozat and Manning, 2017) , while the CKY algorithm is used when parsing to obtain the best tree.We choose to experiment on two languages that contain treebanks representative of different genres, the English Webocorpus Treebank (Petrov and McDonlad, 2012) and the Koala Eukalyptus Corpus (Adesam et al., 2015) .English Webcorpus Treebank (EWT) was introduced in the 2012 shared task on Web Parsing and consists of five subareas: Yahoo answers, emails, Newsgroup texts, product reviews, and Weblog entries. The treebank follows an English Penn Treebank (Marcus et al., 1993) style annotation scheme with some additional POS tags to account for specific annotation needs, resulting in 50 POS tags and 28 phrase heads. We removed unary nodes, traces, and function labels during preprocessing.Swedish Eukalyptus Treebank (SET) consists of: blog entries from the SIC corpus (Östling, 2013), parts of Swedish Europarl (Koehn, 2005) , chapters from books, public information gathered from government and health information sites, and Wikipedia articles, and contains only 13 POS tags and 10 phrases heads. The treebank's annotation scheme is derived from the TiGer Treebank of German (Brants et al., 2004) . Notably this includes discontinuous constituents, resulting in the need to uncross the branches of the extracted treebank. 
We follow the procedure used for TiGer, namely the transformation process proposed by Boyd (2007) using treetools, 3 and additionally remove all function labels. Data Splits The EWT is traditionally used as dev and test sets for examining the out-of-domain adaptability of models developed on the English PTB (Petrov and McDonald, 2012), and we are not aware of any standard splits for the EWT nor of standard splits for the SET. For this reason we chose to split each genre within the treebanks into approximately sequential 80/10/10 splits, with selected treebank statistics presented in Table 1. For cross-genre experiments, the EWT and SET subgenres are concatenated respectively. We use four different embeddings in our experiments: both bert-base-multilingual-cased and bert-base-cased (Devlin et al., 2019), bert-large-swedish-uncased, 4 and bert-base-swedish-cased (Malmsten et al., 2020). bert-large-swedish-uncased (swBERT) was trained on Swedish Wikipedia (300M words). bert-base-swedish-cased (kbBERT) was trained using newspapers (2,977M words), government publications (117M words), legally available e-deposits (62M words), 7 internet forums (31M words), and Swedish Wikipedia (29M words). Footnotes: 4 https://github.com/af-ai-center/SweBERT 5 https://github.com/Kungbib/swedish-bert-models 6 We do not consider this to mean there are 16 distinct genres as we define the term, rather to note the more diversified domains, though author style would naturally influence any learned representations. 7 Including governmental releases, books, and magazines.
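For illustration, the per-genre sequential 80/10/10 split and the cross-genre concatenation described above can be sketched in a few lines of Python; the function names and the representation of a treebank as a list of trees per genre are assumptions of this sketch, not part of any released code.

from typing import Dict, List, Tuple

def sequential_split(trees: List[str], train: float = 0.8, dev: float = 0.1) -> Tuple[List[str], List[str], List[str]]:
    # Split one genre's trees sequentially (no shuffling) into train/dev/test.
    n_train = int(len(trees) * train)
    n_dev = int(len(trees) * dev)
    return (trees[:n_train],
            trees[n_train:n_train + n_dev],
            trees[n_train + n_dev:])

def cross_genre_sets(genres: Dict[str, List[str]]) -> Tuple[List[str], List[str], List[str]]:
    # Concatenate the per-genre splits, as done for the cross-genre experiments.
    train, dev, test = [], [], []
    for trees in genres.values():
        tr, dv, te = sequential_split(trees)
        train += tr
        dev += dv
        test += te
    return train, dev, test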
2
The evaluation is based on wordnet reconstruction task proposed in : randomly selected words are removed from a wordnet and next the expansion algorithm is applied to reattach them. Removing of every word changes wordnet structure, so it is best to remove one word at a time, but due to the efficiency, small word samples are processed in one go. As the algorithm may produce multiple attachment suggestions for a word, they are sorted according to semantic support of the suggested attachments. A histogram of distances between a suggested attachment place and the original synset is built. We used two approaches to compute the distance between the proposed and original synsets. According to the first, called straight, a proper path can include only hypernymy or hyponymy links (one direction only per path), and one optional final meronymic link. Only up to 6 links are considered, as longer paths are not useful suggestions for linguists.In the second approach, called folded, shorter paths are considered, up to 4 links. Paths can include both hypernymy and hyponymy links, but only one change of direction and an optional meronymic link must be final. In this approach we consider close cousins (co-hyponyms) as valuable suggestions for linguists.The collected results are analysed according to three strategies. In the closest path strategy we analyse only one attachment suggestion per lemma that is the closest to any of its original locations. In the strongest, only one suggestion with the highest support for a lemma is considered. In the all strategy all suggestions are evaluated.A set of test words was selected randomly from wordnet words according to the following conditions. Only words of the minimal frequency corpus 200 were used due to the applied methods for relation extraction. Moreover, only words located further than 3 hyponymy links from the top were considered, as we assumed that the upper parts are constructed manually in most wordnets.For the sake of comparison with (Snow et al., 2006) and two similar KSs were built: a hypernym classifier and a cousin classifier. The first (Snow et al., 2004) was trained on English Wikipedia corpus (1.4 billion words) parsed by Minipar (Lin, 1993) . We extracted all patterns linking two nouns in dependency graphs and occurring at least five times and used them as features for logistic regression classifier from Li-bLINEAR. Word pairs classified as hyperonymic were described by probabilities of positive decisions. Following , the cousin classifier was based on distributional similarity instead of text clustering as the clustering method was not well specified in (Snow et al., 2006) . The cousin classifier is meant to predict (m, n)-cousin relationship between words. The classifier was trained to recognize two classes: 0 ≤ m, n ≤ 3 and the negative. The measure of Semantic Relatedness (MSR) was used to produce input features to the logistic regression classifier. MSR was calculated as a cosine similarity between distributional vectors: one vector per a word, each vector element corresponds to the frequency of co-occurrences with other words in the selected dependency relations. Co-occurrence frequencies were weighted by PMI.A sample of 1064 test words was randomly selected from WordNet 3.0. It is large enough for the error margin 3% and 95% confidence level (Israel, 1992) . Trained classifiers were applied to every pair: a test word and a noun from WordNet.As a baseline we used the well known and often cited algorithm PWE (Snow et al., 2006) . 
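As an illustration of the two path definitions, the following Python sketch checks whether a path between a suggested attachment and the original synset qualifies as straight or folded; a path is represented as a list of relation labels, and the label strings ("hypernym", "hyponym", "meronym") are assumptions of this sketch rather than the evaluation code itself.

def _split_final_meronym(path):
    # The optional meronymic link must be the final one.
    if path and path[-1] == "meronym":
        return path[:-1], True
    return path, False

def is_straight(path, max_links=6):
    # Straight: only hypernymy or hyponymy links, one direction per path,
    # plus an optional final meronymic link; at most max_links links in total.
    if not path or len(path) > max_links:
        return False
    core, _ = _split_final_meronym(path)
    if any(r not in ("hypernym", "hyponym") for r in core):
        return False
    return len(set(core)) <= 1

def is_folded(path, max_links=4):
    # Folded: both directions allowed, but at most one change of direction,
    # and an optional meronymic link must be final; at most max_links links.
    if not path or len(path) > max_links:
        return False
    core, _ = _split_final_meronym(path)
    if any(r not in ("hypernym", "hyponym") for r in core):
        return False
    direction_changes = sum(1 for a, b in zip(core, core[1:]) if a != b)
    return direction_changes <= 1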
Its performance strongly depends on the values of predefined parameters. We tested several combinations of values and selected the following ones: minimal probability of evidence: 0.1, inverse odds of the prior: k = 4, cousins neighbourhood size: (m, n) ≤ (3, 3), maximum links in hypernym graph: 10, penalization factor: λ = 0.95. In Paintball, probability values produced by the classifiers were used as weights. The hypernym classifier produces values from the range (0, 1]. Values from the cousin classifier were mapped to the same range by multiplying them by 4. Values of the parameters were set heuristically in relation to the weight values as follows: τ0 = 0.4, τ3 = τ0, τ4 = 0.8, = 0.14 and µ = 0.65. Transmittance was used to define links for support spreading in Paintball. The graph was formed by hyper/hyponymy (H/h), holo/meronymy (o/m), antonymy (a) and synonymy (represented by synsets). Transmittance is f_T(r, v) = α * v, where α was: 0.7 for hypernymy, 0.6 for mero/holonymy and 0.4 for antonymy. The parameter α was 1 for other selected relations and 0 for non-selected ones. Impedance allows for controlling the shape of the spreading graph. Here, the impedance function is defined as f_I(r_1, r_2, v) = β * v, where β ∈ {0, 1}. We heuristically selected β = 0 for the following pairs: ⟨h, a⟩, ⟨h, m⟩, ⟨H, h⟩, ⟨H, o⟩, ⟨a, a⟩, ⟨a, m⟩, ⟨a, o⟩, ⟨m, a⟩ and ⟨o, a⟩. The Paintball and PWE algorithms were tested on the same word sample; the results are presented in Tab. 1 and 2. Test words were divided into two sub-samples: frequent words, >1000 occurrences (Freq in tables), and infrequent, ≤999 (Rare in tables), as we expected different precision and coverage of KSs. Statistically significant results were marked with a '*'. We rejected the null hypothesis of no difference between results at significance level α = 0.05; the paired t-test was used. Considering straight paths and their maximal length up to 6 links, PWE performs slightly better than Paintball. Coverage for words and senses is also higher for PWE, 100% (freq.: 100%) and 44.79% (43.93%), than for Paintball, 63.15% (freq.: 91.63%) and 24.66% (26.62%). However, a closer analysis reveals that PWE shows a tendency to find suggestions at larger distances from the proper place. If we take into account only suggestions located up to 3 links (the column [0,2] in Tab. 1), then the order is different: Paintball is significantly better than PWE. Paintball mostly suggests more specific synsets for new words and abstains in the case of the lack of evidence, e.g., for x = feminism, PWE suggests the following synset list: {abstraction, abstract entity}, {entity}, {communication}, {group, grouping}, {state}, while the suggestions of Paintball, still not perfect, are more specific: {causal agent, cause, causal agency}, {change}, {political orientation, ideology, political theory}, {discipline, subject, subject area, subject field, field, field of study, study, bailiwick}, {topic, subject, issue, matter}. PWE very often suggests abstract and high-level synsets like {entity}, {event}, {object}, {causal agent, cause, causal agency}, etc. They dominate whole branches and are at a distance of no more than 6 links from many synsets. Table 2: Folded path evaluation strategy: PWE and Paintball precision on WordNet 3.0. Paintball outperforms PWE in the evaluation based on the folded paths. For more than half of the test words, the strongest proposal was in the right place or up to a couple of links from it.
Suggestions were generated for 72.65% of lemmas, and the sense recall was 24.63%, which is comparable with other algorithms.
2
In this section, we first introduce a soft prompt tuning method for sentiment classification that utilizes soft prompts to capture domain-specific knowledge. Then we present a domain adversarial training method for domain adaptation. Finally, we describe the overall learning procedure.Prompt tuning is an approach to add extra information for PLMs by reformulating downstream tasks as cloze questions. The primary components include a template and a set of label words, where the template is a background description of current task and the label words are the high-probability vocabulary predicted by PLMs in the current context. In the binary sentiment classification, we denote the input sentence as x = [w 1 , . . . , w n ], the output label as y. Here y ∈ Y, and the label space Y = {positive, negative}. Prompt tuning formalizes the classification task into a MLM task. Given a PLM M and its vocabulary V, a prompt consists of a template function T (•) that converts the input sentence x to a prompt input x prompt = T (x) with the [MASK] token and a set of label words V * ⊂ V, which are connected with the label space through a mapping function v : Y → V * . As shown in Figure 2 where e(•) represents the embedding function of M.Here we can denote a PLM M as a function mapping from x prompt to the feature representation and vocabulary distribution of the [MASK] token, represented as: EQUATIONV * = ... (x 1 1 ) (x 2 1 ) (x 1 1 ) ... (x 1 −1 ) (x 2 −1 ) (x −1 −1 ) ... (x 1 ) (x 2 ) (x ) ("[MASK]")...("[MASK]")... {good, bad}. So,EQUATIONGiven an annotated dataset S = {x i , y i } N i=1 , the training objective for soft prompt tuning is obtained using the binary cross-entropy loss,L class (S; θ M,p,f ) = − N i=1 log p(y i |x i ) I{ŷ i =1} + log(1 − p(y i |x i )) I{ŷ i =0} (4)whereŷ i represents the ground truth label ranging from 1 as the positive label and 0 as the negative label). θ M,p,f represents the overall trainable parameters of the PLM M, several learnable vectors p and the MLM head function f .For the same task in different domains, domain adversarial training can not only transfer the generic knowledge from source domains to the target domain, but also train more domain-aware classifiers. As shown in Figure 2 , domain adversarial training aims to make the feature distributions of the [MASK] position from different domains closer. More intuitively, it will encourage the MLM head classifer to obtain domain-invariant features across domains.Based on the hidden representation h [MASK] by the PLM, the detailed process of domain adversarial training is as follows: given m (m ≥ 1) source domains, we assume that between each source domain S l (l ∈ [1, . . . , m]) and the target domain T have a domain discriminative function g l : R h → D that discriminates between the source domain and the target domain, where the domain label set is represented as D = {0, 1}, 0 is the source domain label, and 1 is the target domain label. To this end, there are m domain discriminators, denoted as g = {g l } m l=1 . Given an input example x from either the l-th (l ∈ [1, . . . 
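The prompt-tuning objective in Eq. (4) can be illustrated with a short PyTorch/Transformers sketch that reads p(y|x) off the [MASK] position, restricted to the two label words; the template, the label words {good, bad}, and the model name are illustrative choices, and the learnable soft prompt vectors p are omitted here for brevity.

import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

NEG_ID = tokenizer.convert_tokens_to_ids("bad")
POS_ID = tokenizer.convert_tokens_to_ids("good")

def template(sentence: str) -> str:
    # A hypothetical template T(x); the actual template is a design choice.
    return f"{sentence} It was {tokenizer.mask_token}."

def prompt_classification_loss(sentences, labels):
    # labels: 0/1 tensor; returns the binary cross-entropy of Eq. (4).
    enc = tokenizer([template(s) for s in sentences],
                    return_tensors="pt", padding=True, truncation=True)
    logits = model(**enc).logits                                   # (B, L, |V|)
    mask_idx = (enc["input_ids"] == tokenizer.mask_token_id).nonzero()
    mask_logits = logits[mask_idx[:, 0], mask_idx[:, 1]]           # (B, |V|), one mask per input
    p_pos = torch.softmax(mask_logits[:, [NEG_ID, POS_ID]], dim=-1)[:, 1]
    return F.binary_cross_entropy(p_pos, labels.float())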
, m]) source domain or the target domain, we first obtain the task-specific head representation h [MASK] by M and then model the probability p(d|x) for discriminating the domain label d ∈ D as:EQUATIONGivenm source domain datasetŜ = {S l } m l=1 = {{x s i } N s l i=1 } m l=1 and a target domain dataset T = {x t i } N t i=1 ,where N s l is the number of samples in the l-th source domain and N t is the number of samples in the target domain, the domain discriminative objective is to minimize the following cross-entropy loss,EQUATIONwhered i represents the truth domain label and θ M,p,g represents the overall trainable parameters of the PLM M, several learnable vectors p and m domain discriminators g.The domain adversarial training among m source domains and the target domain can be seen as a two-player minimax game where the domain classifiers g = {g l } m l=1 tend to minimize the domain discrimination loss so as to make the domain discriminators strong while the PLM M tends to maximize the domain discrimination loss so as to weaken the domain discrimination.Formally, the domain adversarial training objective w.r.t. to g, p and M can be represented as:EQUATION4.3 Learning Procedure Joint training objective. Given m source do-mainsŜ and a target domain T , the sentiment classifier and the domain discriminator are jointly trained for optimizing the PLM M, soft prompt embeddings p, MLM head function f and domain discriminators g, and the final training objective is formally represented as:EQUATIONwhere λ is a trade-off parameter. The sentiment classification objective L class and the domain discrimination objective L domain are defined in Eq. (4) and Eq. (6), respectively. = {S l } m l=1 = {{x s i , y s i } N s l i=1 } m l=1 and a target domain dataset T = {x t i } N t i=1; the number of training iterations n. Output: Configurations of AdSPT θ M,p,f,g Initialize: PLM θM; soft prompt embeddings θp; MLM head function θ f ; domain discriminator {θg l } m l=1 ; learning rate η; trade-off parameter λ.1: while Training steps not end do 2: for d in {Source, Target} do 3:if d = Source then 4:for l in {1, . . . , m} do 5:L class ← L class (S l ; θ M,p,f ) 6: L domain ← L domain (S l , T ; θM,p,g l )# Minimizing the MLM head classification loss 7:θ f ← θ f − ∇ θ f L class # Minimizing the domain discrimination loss 8: θg l ← θg l − ∇ θg l L domain 9:end for # Minimizing the sentiment classification loss 10:θM,p ← θM,p − ∇ θ M,p (λL class − L domain ) 11:end if 12:end for 13: end while target domain are mapped to different domain discriminators to train the PLM M, several learnable vectors p and the domain discriminator g l . The corresponding domain discrimination loss is computed in line 6. The sentiment classification loss is used for updating the parameters of the PLM, several learnable vectors and the MLM head function (line 7, 10). The domain discrimination loss is used for updating the parameters of the PLM, several learnable vectors and the domain discriminators. Obviously, the parameters of the PLM and several learnable vectors be updated together by the above two losses.
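The alternating updates of the training procedure above can be sketched as follows; the wrapper objects holding the losses, parameter groups, and optimizers are placeholders invented for this sketch, with losses.classification and losses.domain standing in for Eq. (4) and Eq. (6), and each loss call is assumed to run its own forward pass.

def adversarial_step(losses, optimizers, batch_src, batch_tgt, l, lam=0.1):
    # 1) MLM head f: minimize the sentiment classification loss on source domain l.
    loss_class = losses.classification(batch_src)
    optimizers.head.zero_grad()
    loss_class.backward()
    optimizers.head.step()

    # 2) Domain discriminator g_l: minimize the domain discrimination loss.
    loss_domain = losses.domain(batch_src, batch_tgt, l)
    optimizers.discriminators[l].zero_grad()
    loss_domain.backward()
    optimizers.discriminators[l].step()

    # 3) PLM M and soft prompts p: minimize lambda * L_class - L_domain,
    #    i.e. act as the adversary that weakens the domain discriminators.
    adv_loss = lam * losses.classification(batch_src) - losses.domain(batch_src, batch_tgt, l)
    optimizers.plm_and_prompts.zero_grad()
    adv_loss.backward()
    optimizers.plm_and_prompts.step()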
2
In this section, we first give a brief overview of the mBART model (Liu et al., 2020a) , and then we introduce our proposed method that aims to adapt mBART to unseen languages in the translation task.The mBART model follows the sequence-tosequence (Seq2Seq) pre-training scheme of the BART model (Lewis et al., 2020) (i.e., reconstructing the corrupted text) and is pre-trained on largescale monolingual corpora in 25 languages. Two types of noises are used to produce the corrected text. The first is to remove text spans and replace them with a mask token, and the second is to permute the order of sentences within each instance. Thanks to the large-scale pre-training on multiple diverse languages, the mBART model has shown its strength at building low-resource NMT systems by being fine-tuned to the target language pair, and it is also shown to possess a powerful generalization ability to languages that do not appear in the pre-training corpora (Liu et al., 2020a) .Despite the powerful adaptation ability that mBART possesses, we argue that its performance on unseen languages is still sub-optimal since it has to learn these languages from scratch. Therefore, we propose to conduct the continual pre-training (CPT) on the mBART model to improve its adap-tation ability to unseen languages. The process of this additional pre-training task is illustrated in Figure 1 , and the details are described as follows.Pre-Training We denote lang 1 →lang 2 as the needed translation pair, where lang 1 is the source language and lang 2 is the target language, and at least one of them is an unseen language for the mBART model. The CPT can be considered as maximizing L θ :EQUATIONwhere θ is initialized with mBART's parameters, D 2 denotes a collection of monolingual documents in lang 2 , and f is a function to generate noisy mixed-language text that contains both lang 1 and lang 2 .Noisy Mixed-Language Function (f ) Given a monolingual instance X, we first use the noise function (denoted as g, described in §2.1) used in Liu et al. (2020a) to corrupt the text, and then we use a dictionary of lang 2 to lang 1 to assist in the function of producing mixed-language sentences (denoted as h). Specifically, after the processing of the noise function g, if the non-masked tokens in lang 2 exist in the dictionary, we set a probability to replace it with its translation in lang 1 . If it is not being replaced, there is a 50% chance that we will directly delete this token, and otherwise, we keep the original token in lang 2 . More formally, function f (in Eq. (1)) can be considered as the combination of two functions:EQUATIONNotice that lang 2 is not always the unseen language (i.e., lang 1 could be the only unseen language). Since the inputs are mixed with the tokens in lang 1 and lang 2 , the model can always learn the unseen language. The reason why we choose to reconstruct lang 2 instead of lang 1 is because lang 2 is the target language that the decoder needs to generate in the translation task, and reconstructing lang 2 in the pre-training makes the model easier to adapt to the lang 1 →lang 2 translation pair. We leverage the noise function g since it has shown its effectiveness at helping pre-trained models to obtain language understanding ability. The intuition of producing mixed-language text for inputs is to roughly align lang 1 and lang 2 , since the model needs to understand the tokens of lang 1 so as to reconstruct the translations in lang 2 . 
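The noisy mixed-language function f = h ∘ g can be sketched in plain Python as below; the token-level handling of the mask, the mask symbol, and the replacement probability are simplifications and assumptions of this sketch rather than the exact implementation.

import random

def mixed_language_noise(tokens, masked_flags, lang2_to_lang1, replace_prob=0.5, mask_token="<mask>"):
    # `tokens` are lang2 tokens after the mBART-style noise function g has been applied;
    # `masked_flags[i]` is True if token i was corrupted (masked) by g;
    # `lang2_to_lang1` is the lang2-to-lang1 dictionary used by h.
    out = []
    for token, is_masked in zip(tokens, masked_flags):
        if is_masked:
            out.append(mask_token)                      # keep the corruption introduced by g
        elif token in lang2_to_lang1 and random.random() < replace_prob:
            out.append(lang2_to_lang1[token])           # replace with its lang1 translation
        elif token in lang2_to_lang1 and random.random() < 0.5:
            continue                                     # not replaced: delete with 50% chance
        else:
            out.append(token)                            # otherwise keep the original lang2 token
    return out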
The purpose of not replacing all dictionary tokens with their translations is to increase the variety of the mixed-language text: given that there will be plenty of frequent words (e.g., stopwords), replacing all of them with the corresponding translations could make the sentences unnatural, and the translations of the frequent words in lang 1 would likely not match the context in lang 2 . In addition, adding a probability of deleting the original token in function h injects extra noise and further increases the diversity of the generated mixed-language text. 3 Experimental Settings
2
Structural Correspondence Learning (Blitzer et al., 2006) uses only unlabeled data to find a common feature representation for a source and a target domain. The idea is to first manually identify "pivot" features that are likely to have similar behavior across both domains. SCL then learns a transformation from the remaining non-pivot features into the pivot feature space. The result is a new set of features that are derived from all the non-pivot features, but should be domain independent like the pivot features. A classifier is then trained on the combination of the original and the new features. Table 1 gives the details of the SCL algorithm. First, for each pivot feature, we train a linear classifier to predict the value of that pivot feature using only the non-pivot features. The weight vectors learned for these linear classifiers,ŵ i , are then concatenated into a matrix, W , which represents a projection from non-pivot features to pivot features. Singular value decomposition is used to reduce the dimensionality of the projection matrix, yielding a reduced-dimensionality projection matrix θ. Finally, a classifier is trained on the combination of the original features and the features generated by applying the reduced-dimensionality projection matrix θ to the non-pivot features x [p:m] .Standard SCL does not define how pivot features are selected; this must be done manually for each new task. However, SCL does provide standard definitions for the loss function (L), the conversion to binary values (B i ), the dimensionality of the new correspondence space (d), and the feature combination function (C).L is defined as Huber's robust loss:L(a, b) = max(0, 1 − ab) 2 if ab ≥ −1 −4ab otherwiseThe conversion from pivot feature values to binary classification is defined as:B i (y) = 1 if y > 0 0 otherwiseA few different dimensionalities for the reduced feature space have been explored (Prettenhofer and Stein, 2011) , but most implementations have followed the standard SCL description (Blitzer et al., 2006) with d defined as:d = 25The feature combination function, C, is defined as simple concatenation, i.e., use all of the old pivot Input: • θ ∈ R n×d , a projection from non-pivot features to the correspondence space • h : R m+d → A, the trained predictor Algorithm:• S = {x : x ∈ R m },1. For each pivot feature i :0 ≤ i < p, learn prediction weightsŵ i = min w∈R n x∈U L(w x [p:m] , B(x i ))2. Construct a matrix W ∈ R n×p using eachŵ i as a column 3. Apply singular value decomposition W = U ΣV where U ∈ R n×n , Σ ∈ R n×p , V ∈ R p×p 4. Select the reduced-dimensionality projection, θ = U [0:d,:] 5. Train a classifier h from [C(x, x [p:m] θ), f (x) : x ∈ SC(x, z) = [x; z]We call this the pivot+nonpivot+new setting of C. The following sections discuss alternative parameter choices for pivot features, B i , d, and C.The SCL algorithm depends heavily on the pivot features being domain-independent features, and as discussed in Section 2, which features make sense as pivot features varies widely by task. No previous studies have explored structural correspondence learning for authorship attribution, so one of the outstanding questions we tackle here is how to identify pivot features. Research has shown that the most discriminative features in attribution and the most robust features across domains are character n-grams (Stamatatos, 2013; Sapkota et al., 2014) . 
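A compact sketch of steps 1-4 of the SCL algorithm with numpy and scikit-learn is given below; the modified_huber loss of SGDClassifier stands in for Huber's robust loss, and the dense feature matrices and d = 25 follow the standard description above, so this is an illustrative approximation rather than the original implementation.

import numpy as np
from sklearn.linear_model import SGDClassifier

def scl_projection(X_unlabeled, pivot_idx, nonpivot_idx, d=25):
    # Train one linear predictor per pivot feature from the non-pivot features,
    # stack the learned weight vectors into W, and keep the top-d singular vectors.
    X_nonpivot = X_unlabeled[:, nonpivot_idx]
    columns = []
    for i in pivot_idx:
        y = (X_unlabeled[:, i] > 0).astype(int)           # B_i: binarise the pivot feature
        if len(np.unique(y)) < 2:                         # pivot never (or always) fires
            columns.append(np.zeros(X_nonpivot.shape[1]))
            continue
        clf = SGDClassifier(loss="modified_huber", max_iter=1000, tol=1e-3)
        clf.fit(X_nonpivot, y)
        columns.append(clf.coef_.ravel())
    W = np.stack(columns, axis=1)                         # (n_nonpivot, n_pivot)
    U, _, _ = np.linalg.svd(W, full_matrices=False)
    return U[:, :d]                                       # reduced-dimensionality projection theta

def pivot_nonpivot_new(X, nonpivot_idx, theta):
    # The standard feature combination C: original features plus the projected ones.
    return np.hstack([X, X[:, nonpivot_idx] @ theta])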
We thus consider two types of character n-grams used in authorship attribution that might make good pivot features.Classical character n-grams are simply the sequences of characters in the text. For example, given the text:The structural correspondence character 3-gram features would look like:"The", "he ", "e s", " st", "str", "tru", "ruc", "uct", ...We propose to use as pivot features the p most frequent character n-grams. For non-pivot features, we use the remaining features from prior work (Sapkota et al., 2014). These include both the remaining (lower frequency) character n-grams, as well as stop-words and bag-of-words lexical features. We call this the untyped formulation of pivot features. Sapkota et al. (2015) showed that classical character n-grams lose some information in merging together instances of n-grams like the which could be a prefix (thesis), a suffix (breathe), or a standalone word (the). Therefore, untyped character n-grams were separated into ten distinct categories. Four of the ten categories are related to affixes: prefix, suffix, space-prefix, and space-suffix. Three are wordrelated: whole-word, mid-word, and multi-word. The final three are related to the use of punctuation: beg-punct, mid-punct, and end-punct. For example, the character n-grams from the last section would instead be replaced with:"whole-word:The", "space-suffix:he ", "multi-word:e s", "space-prefix: st", "prefix:str", "mid-word:tru", "mid-word:ruc", "mid-word:uct", ... Sapkota et al. (2015) demonstrated that n-grams starting with a punctuation character (the beg-punct category) and with a punctuation character in the middle (the mid-punct category) were the most effective character n-grams for cross-domain authorship attribution. We therefore propose to use as pivot features the p/2 most frequent character ngrams from each of the beg-punct and mid-punct categories, yielding in total p pivot features. For non-pivot features, we use all of the remaining features of Sapkota et al. (2015) . These include both the remaining (lower frequency) beg-punct and mid-punct character n-grams, as well as all of the character n-grams from the remaining eight categories. We call this the typed formulation of pivot features. 2Authorship attribution typically relies on countbased features. However, the classic SCL algorithm assumes that all pivot features are binary, so that it can train binary classifiers to predict pivot feature values from non-pivot features. We propose a binarization function to produce a binary classification problem from a count-based pivot feature by testing whether the feature value is above or below the feature's median value in the training data:B i (y) = 1 if y > median({x i : x ∈ S ∪ U }) 0 otherwiseThe intuition is that for count-based features, "did this pivot feature appear at least once in the text" is not a very informative distinction, especially since the average document has hundreds of words, and pivot features are common. A more informative distinction is "was this pivot feature used more or less often than usual?" and that corresponds to the below-median vs. above-median classification.The reduced dimensionality (d) of the low-rank representation varies depending on the task at hand, though lower dimensionality may be preferred as it will result in faster run times. We empirically compare different choices for d: 25, 50, and 100. We also consider the question, how critical is dimensionality reduction? 
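The pivot selection and the proposed median-based binarization can be sketched as follows; representing each document as a Counter of character n-grams is an assumption of this sketch. For the typed formulation, the same selection would simply be applied separately to the beg-punct and mid-punct n-grams, taking the p/2 most frequent of each.

from collections import Counter
import numpy as np

def select_pivots(docs_ngram_counts, p=100):
    # Untyped formulation: the p most frequent character n-grams become pivots.
    total = Counter()
    for counts in docs_ngram_counts:
        total.update(counts)
    return [ngram for ngram, _ in total.most_common(p)]

def median_binarize(feature_column, median=None):
    # Proposed B_i for count-based pivots: 1 if above the training median, else 0.
    column = np.asarray(feature_column, dtype=float)
    if median is None:
        median = np.median(column)
    return (column > median).astype(int)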
For example, if there 2 Because the untyped and typed feature sets are designed to directly replicate Sapkota et al. (2014) and Sapkota et al. (2015) , respectively, both include character n-grams, but only untyped includes stop-words and lexical features. are only p = 100 pivot features, is there any need to run singular-value decomposition? The goal here is to determine if SCL is increasing the robustness across domains primarily through transforming non-pivot features into pivot-like features, or if the reduced dimensionality from the singular-value decomposition contributes something beyond that.It's not really clear why the standard formulation of SCL uses the non-pivot features when training the final classifier. All of the non-pivot features are projected into the pivot feature space in the form of the new correspondence features, and the pivot feature space is, by design, the most domain independent part of the feature space. Thus, it seems reasonable to completely replace the nonpivot features with the new pivot-like features. We therefore consider a pivot+new setting of C:pivot+new: C(x, z) = [x [0:p] ; z]We also consider other settings of C, primarily for understanding how the different pieces of the SCL feature space contribute to the overall model.pivot: C(x, z) = x [0:p] nonpivot: C(x, z) = x [p:m] new: C(x, z) = z pivot+nonpivot: C(x, z) = xNote that the pivot+nonpivot setting corresponds to a model that does not apply SCL at all.
2
In this work, we approach the task of linguistic metaphor detection as a classification problem. Starting from a known target domain (i.e. Governance), we first produce a target domain signature which represents the target-specific dimensions of the full conceptual space. Using this domain signature, we are able to separate the individual terms of a sentence into source frame elements and target frame elements and to independently perform a semantic expansion for each set of elements using WordNet and Wikipedia as described in our earlier work (Bracewell et al., 2013) . Taken together, the semantic expansions of a text's source frame elements and target frame elements make up the full semantic signature of the text which can then be compared to an index of semantic signatures generated for a collection of manually detected metaphors. We use as features for our classifiers a set of metrics that are able to quantify the similarity between the given semantic signature and the signatures of metaphors found within the index.In order to produce a semantic representation of the text, we first build a target domain signature, which we define as a set of highly related and interlinked WordNet senses that correspond to our particular target domain with statistical reliability. For example, in the domain of Governance the concepts of "law", "government", and "administrator", along with their associated senses in WordNet, are present in the domain signature. We generate this signature using semantic knowledge encoded in the following resources: (1) the semantic network encoded in WordNet; (2) the semantic structure implicit in Wikipedia; and (3) collocation statistics taken from the statistical analysis of a large corpora. In particular, we use Wikipedia as an important source of world knowledge which is capable of providing information about concepts, such as named entities, that are not found in WordNet as shown in several recent studies (Toral et al., 2009; Niemann and Gurevych, 2011) . For example, the organization "Bilderberg Group" is not present in Word-Net, but can easily be found in Wikipedia where it is listed under such categories as "Global trade and professional organizations", "International business", and "International non-governmental organizations". From these categories we can determine that the "Bilderberg Group" is highly related to WordNet senses such as "professional organization", "business", "international", and "nongovernmental organization".We begin our construction of the domain signature by utilizing the semantic markup in Wikipedia to collect articles that are highly related to the target concept by searching for the target concept (and optionally content words making up the definition of the target concept) in the Wikipedia article titles and redirects. These articles then serve as a "seed set" for a Wikipedia crawl over the intra-wiki links present in the articles. By initiating the crawl on these links, it becomes focused on the particular domain expressed in the seed articles. The crawling process continues until either no new articles are found or a predefined crawl depth (from the set of seed articles) has been reached. The process is illustrated in Figure 1 . The result of the crawl is a set of Wikipedia articles whose domain is related to the target concept. 
From this set of articles, the domain signature can be built by exploiting the semantic information provided by WordNet.The process of going from a set of target concept articles to a domain signature is illustrated in Figure 2 and begins by associating the terms contained in the gathered Wikipedia articles with all of their possible WordNet senses (i.e. no word sense disambiguation is performed). The word senses are then expanded using the lexical (e.g. derivationally related forms) and semantic relations (e.g. hypernym and hyponym) available in WordNet. These senses are then clustered to eliminate irrelevant senses using the graph-based Chinese Whispers algorithm (Biemann, 2006) . We transform our collection of word senses into a graph by treating each word sense as a vertex of an undirected, fully-connected graph where edge weights are taken to be the product of the Hirst and St-Onge (1998) WordNet similarity be-tween the two word senses and the first-order corpus cooccurrence of the two terms. In particular, we use the normalized pointwise mutual information as computed using a web-scale corpus.The clusters resulting from the Chinese Whispers algorithm contain semantically and topically similar word senses such that the size of a cluster is directly proportional to the centrality of the concepts within the cluster as they pertain to the target domain. After removing stopwords from the clusters, any clusters below a predefined size are removed. Any cluster with a low 2 average normalized pointwise mutual information (npmi) score between the word senses in the cluster and the word senses in the set of terms related to the target are likewise removed. This set of target-related terms used in calculating the npmi are constructed from the gathered Wikipedia articles using TF-IDF (term frequency inverse document frequency), where TF is calculated within the gathered articles and IDF is calculated using the entire textual content of Wikipedia. After pruning clusters based on size and score, the set of word senses that remain are taken to be the set of concepts that make up the target domain signature.After constructing a signature that defines the domain of the target concept, it is possible to use this signature to map a given text (e.g. a sentence) into a multidimensional conceptual space which allows us to compare two texts directly based on their conceptual similarity. This process begins by mapping the words of the text into WordNet and extracting the four most frequent senses for each term. In order to improve coverage and to capture entities and terms not found in WordNet, we also map terms to Wikipedia articles based on a statistical measure which considers both the text of the article and the intra-wiki links. The Wikipedia articles are then mapped back to WordNet senses using the text of the categories associated with the article. In the next step, source and target frame elements of a given text are separated using the Word-Net senses contained in the target domain signature.Terms in the text which have some WordNet sense that is included in the domain signature are classified as target frame elements while those that do not are considered source frame elements. Figure 3 shows an overview of the process for determining the source and target concepts within a text. The remainder of the signature induction process is performed separately for the source and target frame elements. 
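The Chinese Whispers clustering over the fully connected sense graph can be sketched as below; edge_weight is assumed to return the product of the WordNet similarity and the normalised PMI of two word senses, as described above, and the fixed iteration count is an assumption of this sketch.

import random

def chinese_whispers(nodes, edge_weight, iterations=20, seed=0):
    # Each node starts in its own class; nodes then repeatedly adopt the class
    # with the highest total edge weight among the other nodes.
    rng = random.Random(seed)
    labels = {n: i for i, n in enumerate(nodes)}
    for _ in range(iterations):
        order = nodes[:]
        rng.shuffle(order)
        for n in order:
            scores = {}
            for m in nodes:
                if m == n:
                    continue
                w = edge_weight(n, m)
                if w > 0:
                    scores[labels[m]] = scores.get(labels[m], 0.0) + w
            if scores:
                labels[n] = max(scores, key=scores.get)
    clusters = {}
    for n, lab in labels.items():
        clusters.setdefault(lab, []).append(n)
    return list(clusters.values())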
In both cases, the senses are expanded using the lexical and semantic relations encoded in Word-Net, including hypernymy, domain categories, and pertainymy. Additionally, source frame elements are expanded using the content words found in the glosses associated with each of the noun and verb senses. Taken together, these concepts represent the dimensions of a full conceptual space which can be separately expressed as the source concept dimensions and target concept dimensions of the space. In order to determine the correct senses for inclusion in the semantic signature of a text, clustering is performed using the same methodology as in the construction of the domain signature. First, a graph is built from the senses with edge weights assigned based on WordNet similarity and cooccurrence. Then, the Chinese Whispers algorithm is used to cluster the graph which serves to disambiguate the senses and to prioritize which senses are examined and incorporated into the source concept dimensions of the conceptual space. Word senses are prioritized by ranking the clusters based on their size and on the highest scoring word sense contained in the cluster using:rank(c) = size(c) • s score(s) |c| (1)where c is the cluster, s is a word sense in the clus-ter, and |c| is the total number of word senses in the cluster. The senses are scored using: (1) the degree distribution of the sense in the graph (more central word senses are given a higher weight); and (2) the length of the shortest path to the terms appearing in the given text with concepts closer to the surface form given a higher weight. Formally, score(s) is calculated as:score(s) = degree(s) + dijkstra(s, R) 2 (2)where degree(s) is degree distribution of s and dijkstra(s, R) is the length of the shortest path in the graph between s and some term in the original text, R. Clusters containing only one word sense or with a score less than the average cluster score (µ c ) are ignored. The remaining clusters and senses are then examined for incorporation into the conceptual space with senses contained in higher ranked clusters examined first. Senses are added as concepts within the conceptual space when their score is greater than the average word sense score (µ s ). To decrease redundancy in the dimensions of the conceptual space, neighbors of the added word sense in the graph are excluded from future processing.Given a semantic signature representing the placement of a text within our conceptual space, it is possible to measure the conceptual distance to other signatures within the same space. By mapping a set of known metaphors into this space (using the process described in Section 3.2), we can estimate the likelihood that a given text contains some metaphor (within the same target domain) by using the semantic signature of the text to find the metaphors with the most similar signatures and to measure their similarity with the original signature.We quantify this similarity using five related measures which are described in Table 2 . Each of these features involves producing a score that ranks every metaphor in the index based upon the semantic signature of the given text in a process similar to that of traditional information retrieval. In particular, we use the signature of the text to build a query against which the metaphors can be scored. 
For each word sense included in the semantic signature, we add a clause to the query which combines the vector space model with the Boolean model so as to prefer a high overlap of senses without requiring an identical match between the signatures. 3 Three of the features simply take the score of the highest ranked metaphor as returned by a query. Most simply, the feature labeled Max Score (naïve) uses the full semantic signature for the text which should serve to detect matches that are very similar in both the source concept dimensions and the target concept dimensions. The features Max Score (source) and Max Score (target) produce the query using only the source concept dimensions of the signature and the target concept dimensions respectively.The remaining two features score the metaphors within the source dimensions and the target dimensions separately before combining the results into a joint score. The feature Max Score (joint) calculates the product of the scores for each metaphor using the source-and target-specific queries described above and selects the maximum value among these products. The final feature, Joint Count, represents the total number of metaphors with a score for both the source and the target dimensions above some threshold (µ j ). Unlike the more naïve features for which a very good score in one set of dimensions may incorrectly lead to a high overall score, these joint similarity features explicitly require metaphors to match the semantic signature of the text within both the source and target dimensions simultaneously.Altogether, these five features are used to train a suite of binary classifiers to make a decision on whether a given text is or is not a metaphor.
2
Parts of the NER system we use for the anonymisation originate from the work conducted between 2001-03 in the Nomen-Nescio project (cf. Bondi Johannessen et al., 2005). The Swedish system has five major components:
• lists of multiword entities
• a rule-based component that uses finite-state grammars, one grammar for each type of entity recognized
• a module 1 that uses the annotations produced by the previous two components in order to make decisions regarding entities not covered by the previous two modules 2
• lists of single names (approx. 80 000)
• a revision/refinement module which performs a final check on an annotated document with entities in order to detect and resolve possible errors and assign new annotations based on existing ones, e.g. by combining annotation fragments.
(Footnote 1: The module is inspired by the document-centred approach by Mikheev et al. (1999). This is a form of on-line learning from documents under processing which looks at unambiguous usages for assigning annotations to ambiguous words. A similar method has also been used by Aramaki et al. (2006), called labelled consistency, for de-identification of PHIs.)
In the current work, seven types of NEs are recognized 3 : persons, locations, organizations, names of drugs and diseases, time expressions and a set of different types of measure expressions such as "age" and "temperature" (Table 1). The annotation uses the XML identifiers ENAMEX, TIMEX and NUMEX; for details see Kokkinakis (2004). The lack of annotated data in the domain prohibits us from using, and thus training, a statistically based system. Since high recall is a requirement, and due to the fragmented, partly ungrammatical nature of the data, the rule-based component of the system seemed an appropriate mechanism for the anonymisation task. Only minor parts of the generic system have been modified. These modifications dealt with: i) multiword place entities with the designators "VC", "VåC", "Vårdc" and "Vårdcentral" in attributive or predicative position, which all translate to Health Care Center, e.g. "Tuve VC" or "VåC Tuve"; these designators are frequent in the domain and were inserted into the rule-based component of the system; ii) the designators "MAVA" (acute medical ward), "SS", "SS/SU" and "SS/Ö", where "SS" is an acronym for the organization "Sahlgrenska Sjukhuset" (Sahlgrenska Hospital); and iii) the development and use of medical terminology, particularly pharmaceutical names (www.fass.se) and names of diseases, particularly eponyms (mesh.kib.ki.se), in order to cover a variety of names that conflict with regular person names. E.g., the drug name "Lanzo" (lansoprazol) is also in the person name list, while "Sjögrens" in the context "Sjögrens syndrom" (Sjogren's syndrome) and "Waldenström" in the context "Mb Waldenström" could also be confused with frequent Swedish last names. Therefore, the drug and disease modules (which were also evaluated, see Section 6) are applied before the person/location modules in order to prevent erroneous readings of PHIs. An example of annotated data, before [a] and after [b] anonymisation, is given below. The content of anonymised NEs (b) is translated as: uppercase X for capital letters, lowercase x for lower-case characters, and N for numbers, while punctuation remains unchanged. The number of dummy characters in each anonymised NE corresponds to the length of the original NE. However, other translation schemes are under consideration. Examples of various NE types are given in Table 1.
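The character-level translation scheme for anonymised NEs can be sketched as a small Python function; the example call is illustrative.

def mask_entity(text: str) -> str:
    # X for capital letters, x for lower-case letters, N for digits;
    # punctuation (and any other character) is left unchanged, preserving length.
    out = []
    for ch in text:
        if ch.isupper():
            out.append("X")
        elif ch.islower():
            out.append("x")
        elif ch.isdigit():
            out.append("N")
        else:
            out.append(ch)
    return "".join(out)

# e.g. mask_entity("Tuve VC") -> "Xxxx XX"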
2
In this work we focus on languages which generate inflections by adding suffixes to the stems of words, as happens, for example, with Romance languages; our approach, however, could be easily adapted to inflectional languages based on different ways of adding morphemes. Let P = {p i } be the set of paradigms in a monolingual dictionary. Each paradigm p i defines a set of suffixes F i = {f ij } which are appended to stems to build new inflected word forms, along with some additional morphological information. The dictionary also includes a list of stems, each labelled with the index of a particular paradigm; the stem is the part of a word that is common to all its inflected variants. Given a stem/paradigm pair composed of a stem t and a paradigm p i , the expansion I(t, p i ) is the set of possible word forms resulting from appending all the suffixes in p i to t. For instance, an English dictionary may contain a paradigm p i with suffixes F i = { ,-s, -ed, -ing} ( denotes the empty string), and the stem want assigned to p i ; the expansion I(want, p i ) consists of the set of word forms want, wants, wanted and wanting. We also define a candidate stem t as an element of Pr(w), the set of possible prefixes of a particular word form w. Given a new word form w to be added to a monolingual dictionary, our objective is to find both the candidate stem t ∈ Pr(w) and the paradigm p i which expand to the largest possible set of morphologically correct inflections. To that end, our method performs three tasks: obtaining the set of all compatible stem/paradigm candidates which generate, among others, the word form w when expanded; giving a confidence score to each of the stem/paradigm candidates so that the next step is as short as possible; and, finally, asking the user about some of the inflections derived from each of the stem/paradigm candidates obtained in the first step. Next we describe the methods used for each of these three tasks.It is worth noting that in this work we assume that all the paradigms for the words in the dictionary are already included in it. The situation in which for a given word no suitable paradigm is available in the dictionary will be tackled in the future, possibly by following the ideas in related works (Monson, 2009) .The first step for adding a word form w to the dictionary is to detect the set of compatible paradigms. To do so, we use a generalised suffix tree (GST) (McCreight, 1976) containing all the possible suffixes included in the paradigms in P . Each of these suffixes is labelled with the index of the corresponding paradigms. The GST data structure allows to retrieve the paradigms compatible with w by efficiently searching for all the possible suffixes of w; when a suffix is found, the prefix and the paradigm are considered as a candidate stem/paradigm pair. In this way, a list L of candidate stem/paradigm pairs is built; we will denote each of these candidates with c n .The following example illustrates this stage of our method. Consider a simple dictionary with only three paradigms:p 1 : f 11 = , f 12 =-s p 2 : f 21 =-y, f 22 =-ies p 3 : f 31 =-y, f 32 =-ies, f 33 =-ied, f 34 =-yingAssume that a user wants to add the new word w=policies to the dictionary. The candidate stem/paradigm pairs which will be obtained after this stage are:c 1 =policies/p 1 , c 2 =policie/p 1 , c 3 =polic/p 2 , c 4 =polic/p 3 2.2 Paradigm ScoringOnce L is obtained, a confidence score is computed for each stem/paradigm candidate c n ∈ L using a large monolingual corpus C. 
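For illustration, candidate stem/paradigm generation can be sketched without the generalised suffix tree by simply trying every suffix of the new word form; the GST only makes this lookup more efficient. The paradigm dictionary below reproduces the three-paradigm example above, with "" denoting the empty suffix.

def candidate_pairs(word, paradigms):
    # For every split of `word` into a candidate stem and suffix, return the
    # paradigms whose suffix set contains that suffix.
    candidates = []
    for cut in range(len(word) + 1):
        stem, suffix = word[:cut], word[cut:]
        for pid, suffixes in paradigms.items():
            if suffix in suffixes:
                candidates.append((stem, pid))
    return candidates

paradigms = {
    "p1": {"", "s"},
    "p2": {"y", "ies"},
    "p3": {"y", "ies", "ied", "ying"},
}
# candidate_pairs("policies", paradigms) yields, among others,
# ("policies", "p1"), ("policie", "p1"), ("polic", "p2") and ("polic", "p3").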
One possible way to compute the score is Score(c_n) = \sum_{w' \in I(c_n)} Appear_C(w') / \sqrt{|I(c_n)|}, where Appear_C(w') is a function that returns 1 when the inflected form w' appears in the corpus C and 0 otherwise, and I is the expansion function as defined before. The square root term is used to avoid very low scores for large paradigms which include a lot of suffixes. One potential problem with the previous formula is that all the inflections in I(c_n) are taken into account, including those that, although morphologically correct, are not very usual in the language and, consequently, in the corpus. To overcome this, Score(c_n) is redefined as Score(c_n) = \sum_{w' \in I_C(c_n)} Appear_C(w') / \sqrt{|I_C(c_n)|}, where I_C(c_n) is the difference set I_C(c_n) = I(c_n) \ Unusual_C(c_n). The function Unusual_C(c_n) uses the words in the dictionary already assigned to p_i as a reference to obtain which of the inflections generated by p_i are not usual in the corpus C. Let T(p_i) be a function retrieving the set of stems in the dictionary assigned to the paradigm p_i. For each of the suffixes f_ij in F_i our system computes Ratio(f_ij, p_i) = \sum_{t \in T(p_i)} Appear_C(t f_ij) / |T(p_i)|, and builds the set Unusual_C(c_n) by concatenating the stem t to all the suffixes f_ij with Ratio(f_ij, p_i) under a given threshold Θ. Following our example, the following inflections for the different candidates will be obtained: I(c_1)={policies, policiess}, I(c_2)={policie, policies}, I(c_3)={policy, policies}, I(c_4)={policy, policies, policied, policying}. Using a large monolingual English corpus C, the word forms policies and policy will be easily found; the other inflections (policie, policiess, policied and policying) will not be found. To simplify the example, assume that Unusual_C(c_n) = ∅ for all the candidates; the resulting scores will be: Score(c_1)=0.71, Score(c_2)=0.71, Score(c_3)=1.41, Score(c_4)=1. Finally, the best candidate is chosen from L by querying the user about a reduced set of the inflections for some of the candidate paradigms c_n ∈ L. To do so, our system firstly sorts L in descending order by Score(c_n). Then, users are asked to confirm whether some of the inflections in each expansion are morphologically correct (more precisely, whether they exist in the language); the only possible answer for these questions is yes or no. In this way, when an inflected word form w' is presented to the user:
• if it is accepted, all c_n ∈ L for which w' ∉ I(c_n) are removed from L;
• if it is rejected, all c_n ∈ L for which w' ∈ I(c_n) are removed from L.
Note that c_1, the best stem/paradigm pair according to Score, may change after updating L. Questions are asked to the user until only one single candidate remains in L. In order to ask as few questions as possible, the word forms shown to the user are carefully selected. Let G(w', L) be a function giving the number of c_n ∈ L for which w' ∈ I(c_n). We use the value of G(w', L) in two different phases: confirmation and discarding. Confirmation. In this stage our system tries to find a suitable candidate c_n, that is, one for which all the inflections in I(c_n) are morphologically correct. In principle, we may consider that the inflections generated by the best candidate c_1 in the current L (the one with the highest score) are correct. Because of this, the user is asked about the inflection w' ∈ I(c_1) with the lowest value for G(w', L), so that, in case it is accepted, a significant part of the paradigms in L are removed from the list.
This process is repeated until• only one single candidate remains in L, which is used as the final output of the system; or• all w ∈ I(c 1 ) are generated by all the candidates remaining in L, meaning that c 1 is a suitable candidate, although there still could be more suitable ones in L.If the second situation holds, the system moves on to the discarding stage.Discarding. In this stage, the system has accepted c 1 as a possible solution, but it needs to check whether any of the remaining candidates in L is more suitable. Therefore, the new strategy is to ask the user about those inflections w / ∈ I(c 1 ) with the highest possible value for G(w , L). This process is repeated until• only c 1 remains in L, and it will be used as the final output of the system; or• an inflection w / ∈ I(c 1 ) is accepted, meaning that some of the other candidates is better than c n .If the second situation holds, the system removes c 1 from L and goes back to the confirmation stage.For both confirmation and discarding stages, if there are many inflections with the same value for G(w , L), the system chooses the one with higher Ratio(f ij , p i ), that is, the most usual in C.It is important to remark that this method cannot distinguish between candidates which generate the same set I(c n ). In the experiments, they have considered as a single candidate.In our example, the ordered list of candidates will be L = (c 3 , c 4 , c 1 , c 2 ) . Choosing the inflection in I(c 3 ) with the smaller value for G(w , L) the inflection policy, which is only generated by two candidates, wins. Hopefully, the user will accept it and this will make that c 1 and c 2 be removed from L. At this point, I(c 3 ) ⊂ I(c 4 ), c 3 is suitable and, consequently, the system will try to discard c 4 . Querying the user about any of the inflections in I(c 4 ) which is not present in I(c 3 ) (policied and policying) and getting user rejection will make the system to remove c 4 from L, confirming c 3 as the most suitable candidate.
2
In this section, we present the details of our model. The overall architecture of our model is shown in Figure 1 . Our model consists of a knowledge-augmented fact encoder, a typed decoder, as well as a grammar-guided evaluator in the reinforcement learning framework. The knowledge-augmented fact encoder takes the given entities, relations, and corresponding auxiliary knowledge, i.e. entity description and relation domain, as input and learns a knowledge-augmented fact representation. The learned representation is passed to a typed decoder for question generation. For each token the decoder outputs, the evaluator rewards the generated question using the grammatical similarity between it and the groundtruth question. Based on the reward assigned by the evaluator, our encoder-decoder module updates and improves its current generation. … ! " # " ! $ # $ # %&" ! % … '()*!# !+,!--.(/Typed Decoder0 " ) " 0 $ ) $ !(1.10 #!2'1.3( 3#-.('#0 . . . . . . . . . ) " .. . In this paper, we leverage auxiliary knowledge about the input triples to generate questions over a background KB. We assume a collection of triples (i.e. facts) F as input. F consist of two parts E and R, where E = {e 1 , • • • , e n } denotes a set of entities (i.e., subjects or objects) and R = {r 1 , • • • , r n−1 } denotes all the predicates (i.e. relations) connecting these entities. Moreover, e n ∈ E denotes the answer entity. Note that these facts form an answer path as a sequence of entities and relations in the KB which starts from the subject and ends with the answer:e 1 r 1 − → e 2 r 2 − → • • • r n−1 − −− → e n .Given the above definitions, the task of KBQG can be formalized as follows:P (Y |F ) = P (Y |E, R) = |Y | ! t=1 P (y t |y <t , E, R, K).(1)Here K = (D, O) represents auxiliary knowledge, where D = " x 1 , • • • , x n # denotes a set of entity description and O = " o 1 , • • • , o n # denotes the domains (i.e. types) for entities. Y = (y 1 , • • • , y |Y | )is the generated question, and y <t denotes all previously generated question words before time-step t.Contrary to conventional encoders, our model takes as input not only triples but also the corresponding auxiliary knowledge as described above. We design a multi-level encoder to obtain the representation of knowledge-augmented facts. We describe our multi-level encoder below, which consists of entity encoder, relation memory, and knowledge-augmented fact encoder.Facts in F only provide the most pertinent information of entities and relations, which is not sufficient to generate a diverse question, especially when F is small. In this paper, we link each entity in E to its respective Wikidata page, and obtain corresponding auxiliary knowledge, including a brief description, and a domain definition, to enrich the source input. For example, for the entity "LeBron James" in Table 1 , its description and domain is "American Basketball Player" and "human" respectively.We leverage label, description and domain information to represent each entity. Since the label information of an entity e i is a single token, we obtain the label embedding l i ∈ R d from a KB embedding matrix E f ∈ R k×d , where k represents the size of KB vocabulary.Both description and domain information are sequences of words, and we employ a two-layer bidirectional LSTM network to encode them respectively. Given an entity e i , its descriptionX i = " x i 1 , • • • , x i m #is a sequence of words x i j of length m. 
The BiLSTM encoder calculates the hidden state at time-step t by ht = [ −−−−→ LST M ([x i t ; − → h t−1 ]); ←−−−− LST M ([x i t ; ← − h t−1 ])].We output the hidden state of the final time-step h m as the embedding vector, and obtain the description embeddingx i = [ − → h m ; ← − h m ].The domain embedding o i is calculated in the same way. The entity embedding e i is the concatenation of the label, domain and description embeddings e i = [l i ; x i ; o i ].Relations in a knowledge base are typically organised hierarchically, such as root/people/deceased person/place of death. The global relation encoder exploits this hierarchical structure through an N -ary Tree-LSTM (Tai et al., 2015) to encode these relation. Each LSTM unit in the relation encoder is able to incorporate information from multiple child units and N is the branching factor of the tree. Each unit (indexed by j) contains input and output gates i j and o j , a memory cell c j and hidden state h j . Instead of a single forget gate, the N -ary Tree-LSTM unit contains one forget gate f jk for each child k, k = 1, 2, • • • , N , and the hidden state and memory cell of the k-th child are h jk and c jk respectively. Given the input r j in the N -ary Tree-LSTM, its hidden state is calculated as follows:i j = σ(W (i) r j + N $ l=1 U (i) l h jl + b (i) ), f jk = σ(W (f ) r j + N $ l=1 U (f ) kl h jl + b (f ) ), o j = σ(W (o) r j + N $ l=1 U (o) l h jl + b (o) ), u j = tanh(W (u) r j + N $ l=1 U (u) l h jl + b (u) ), c j = i j ⊙ u j + N $ l=1 f jl ⊙ c jl , h j = o j ⊙ tanh(c j ).Finally, we use the hidden state of each node h j to represent the corresponding relation embeddings r j . In this way, the encoding is performed once and the relation embeddings are updated through backpropagation in the training process.With knowledge-augmented embeddings of all entities and relations, we encode the triples F using a two-layer bidirectional LSTM network with the input sequence (e 1 , r 1 , e 2 , • • • , r n−1 , e n ), where each e i and r j is described in Section 3.2.1 and 3.2.2 respectively. Note that in this paper we use a linear layer to transform embeddings to maintain the consistency of embedding size. Ultimately, we regard the hidden states as semantic representations and obtain entity representation (h 1 , h 3 , h 5 , . . . , h 2n−1 ) and relation representation (h 2 , h 4 , h 6 , . . . , h 2n ). The last hidden state of BiLSTM is the knowledgeaugmented fact representation F, which is fed into our decoder for question generation.In order to generate questions that are consistent with the input subgraph, inspired by previous work (Du et al., 2017) , we employ a typed decoder based on LSTM to calculate type-specific word probability distributions, which assumes that each word has a latent type of the set {interrogative, entity word, relation word, ordinary words}. In conjunction, we employ a conditional copy mechanism to allow copying from either the entity input or the relation input.At the t-th time-step, our decoder reads the generated word embedding y t−1 and the hidden state s t−1 of the previous time step to generate the current hidden state by s t = LST M (s t−1 , y t−1 ). Note that since the first token of the generated question is interrogative, which is vital for the semantic-consistency of the generated question, we use the answer embedding, instead of the special start-of-sequence token <SOS> embedding, at the first time step of the decoder. 
The answer embedding is the embedding of entity e n , which is obtained in the entity encoder and contains label, description, and domain information.With an explicit answer embedding, the generated interrogative is more accurate, thus alleviates the semantic drift problem.For conditional copy from entity and relation source inputs, we leverage a gated attention mechanism to jointly attend to the entity representation and the relation representation. For entity representation (h 1 , h 3 , h 5 , . . . , h 2n−1 ), the entity context vector c e t is calculated by the attention mechanism: α e t,i =exp(u ⊤ t Wαh e i ) ! j exp(u ⊤ t Wαh e j ), c e t = % T i=1 α e t,i h e i , where W α is a trainable weight parameter. Similarly, the relation context vector c r t can be obtained from the relation representation. Then a gating mechanism is used to control the information flow from these two sources:EQUATIONGenerally, the predicted probability distribution over the vocabulary V is calculated as:P V = sof tmax(W V u t + b V ), where W V and b V are parameters.Different from the conventional decoder, our typed-decoder calculates type-specific generation distributions. Having generated the interrogative, the word types only include {entity word, relation word, ordinary words} in the following decoding steps. We first estimate a type distribution over word types and decide to copy or generate words according to the word type. If the word belongs to entity or relation, we copy this token from the input entity source or relation source. If the word is ordinary, we calculate type-specific generation distributions over the whole vocabulary. Finally, the generation probability is a mixture of type-specific generation/copy distributions where the coefficients are type probabilities.We reuse the attention score α e t,i and α r t,i to derive the copy probability over entities and relations:EQUATIONThe final generation distribution P (y t |y <t , F, K) from which a word can be sampled, is computed by:EQUATIONHere τ yt is the word type at time-step t and g i is a word type among the three word types {g e , g r , g o }.Each word can be any of the three types, but with different probabilities given the current context. The probability distribution over three word types is calculated by: P (τ yt |y <t , F, K) = sof tmax(W 0 s t + b 0 ), where W 0 ∈ R 3×d , and d is the dimension of the hidden state. The type-specific probability distribution is computed as: P (y t |τ yt = g e , y <t , F, K) = P E , P (y t |τ yt = g r , y <t , F, K) = P R , P (y t |τ yt = g o , y <t , F, K) = P V .(5)We employ a reinforcement learning framework to fine-tune the parameters of the encoder-decoder module by optimizing task-specific reward functions through policy gradient in the evaluator. Previous works directly use the final evaluation metrics BLEU, GLEU, ROUGE-L (Du et al., 2017; Kumar et al., 2019b) as rewards. Kumar et al. (2019b) also proposed the question sentence overlap score (QSS), which is the number of common n-grams between predicted question and the source sentence, as a reward function. Consequently, these methods tend to reward generated questions with large n-gram overlaps with the ground-truth question or the source context, thus may result in the generation of highly similar but unvaried questions. Therefore, we present a new reward function that is specifically designed to improve the variety of generated questions.DPTS Reward. 
A Dependency Parse Tree (DPT) provides a grammatical structure for a sentence by annotating edges with dependency types. We propose DPTS, Dependency Parse Tree Similarity, between the generated question and the ground-truth question as our reward function. DPTS encourages the generation of syntactically and semantically valid questions and further improves the diversity of generated questions, as it is not defined over n-gram overlap. To calculate DPTS, we leverage the ACVT (Attention Constituency Vector Tree) kernel (Quan et al., 2019) to efficiently compute similarity based on the number of common substructures between two trees. To apply the DPTS reward, we employ the self-critical sequence training (SCST) algorithm (Rennie et al., 2017). At each training iteration, the model generates two output sequences: the sampled output Y^s, in which each word y^s_t is sampled according to the likelihood P(y_t | y_<t, E, R, K) predicted by the generator, and the baseline output Ŷ, obtained by greedy search. Let r(Y) denote the DPTS reward of an output sequence Y; the loss function is defined as: L_rl = (r(Ŷ) − r(Y^s)) ∑_t log P(y^s_t | y^s_<t, E, R, K). Apart from the loss in the evaluator, we adopt the negative log-likelihood loss and apply supervision on the mixture weights of word types: L_cl = −∑_t log P(ŷ_t | ŷ_<t, F, K) and L_wl = −∑_t log P(τ_ŷt | ŷ_<t, F, K), where ŷ_t is the reference word and τ_ŷt is the reference word type at time t. The overall loss function is defined as L = L_cl + αL_wl + βL_rl, where α and β are two factors balancing the three loss terms.
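A training step with the DPTS reward can be sketched as follows (a minimal PyTorch sketch; generator.sample, generator.greedy, and dpts_reward are assumed helper functions, not released code):

import torch

def scst_step(generator, batch, dpts_reward):
    """Self-critical sequence training with the DPTS reward (sketch).
    dpts_reward is assumed to return one ACVT-kernel similarity per example,
    computed between the dependency parse trees of a generated question and
    the corresponding ground-truth question."""
    y_sampled, log_probs = generator.sample(batch)      # log P(y^s_t | y^s_<t, E, R, K), shape (B, T)
    with torch.no_grad():
        y_greedy = generator.greedy(batch)              # baseline output from greedy search
    r_sampled = torch.as_tensor(dpts_reward(y_sampled, batch["reference"]))  # r(Y^s)
    r_greedy = torch.as_tensor(dpts_reward(y_greedy, batch["reference"]))    # r(Y_hat)
    # L_rl = (r(Y_hat) - r(Y^s)) * sum_t log P(y^s_t | ...)
    loss_rl = ((r_greedy - r_sampled) * log_probs.sum(dim=-1)).mean()
    return loss_rl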
2
This section first introduces the idea of unsupervised detection of bilingual URL pairing patterns ( §2.1) and then continues to formulate the use of the detected patterns to explore more websites, including deep webpages ( §2.2), and those not included in our initial website list ( §2.3).Our current research is conducted on top of the re-implementation of the intelligent web agent to automatically identify bilingual URL pairing patterns as described in Kit and Ng (2007) . The underlying assumption for this approach is that rather than random matching, parallel webpages have static pairing patterns assigned by web masters for engineering purpose and these patterns are put in use to match as many pairs of URLs as possible within the same domain. Given a URL u from the set U of URLs of the same domain, the web agent goes through the set U−{u} of all other URLs and finds among them all those that differ from u by a single token 1 -a token is naturally separated by a special set of characters including slash /, dot ., hyphen -, and underscore in a URL. Then, the single-token difference of a candidate URL pairs is taken as a candidate of URL paring pattern, and all candidate patterns are put in competition against each other in a way to allow a stronger one (that matches more candidate URL pairs) to win over a weaker one (that matches fewer). For instance, the candidate pattern <en, zh> can be detected from the following candidate URL pair: www.legco.gov.hk/yr99-00/en/fc/esc/e0.htm www.legco.gov.hk/yr99-00/zh/fc/esc/e0.htmThe re-implementation has achieved a number of improvements on the original algorithm through re-engineering, including the following major ones.1. It is enhanced from token-based to characterbased URL matching. Thus, more general patterns, such as <e, c>, can be aggregated from a number of weaker ones like <1e, 1c>, <2e, 2c>, ..., etc., many of which may otherwise fail to survive the competition. 2. The original algorithm is speeded up from O(|U | 2 ) to O(|U |) time, by building inverted indices for URLs and establishing constant lookup time for shortest matching URL strings. 2 3. The language detection component has been expanded from bilingual to multi-lingual and hence had the capacity to practically handle multilingual websites such as those from EU and UN.When detected URL patterns are used to match URLs in a web domain for identifying bilingual webpages, noisy patterns (most of which are presumably weak keys) would better be filtered out. A straightforward strategy to do this is by thresholding the credibility of a pattern, which can be defined asC(p, w) = N (p, w) |w| .where N (p, w) is the number of webpages matched into pairs by pattern p within website w, and |w| the size of w in number of webpages. Note that this is the local credibility of a key with respect to a certain website w. Empirically, Kit and Ng (2007) set a threshold of 0.1 to rule out weak noisy keys. Some patterns happen to generalize across domains. The global credibility of such a pattern p is thus computed by summing over all websites involved, in a way that each webpage matched by p is counted in respect to the local credibility of p in the respective website:C(p) = ∑ w C(p, w) N (p, w).Interestingly, it is observed that many weak keys ruled out by the threshold 0.1 are in fact good patterns with a nice global credibility value. In practice, it is important to "rescue" a local weak key with strong global credibility. 
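For concreteness, the detection of candidate pairing patterns can be sketched as follows (a brute-force sketch for clarity; the actual re-implementation is character-based and runs in O(|U|) time with inverted indices):

import re
from collections import Counter
from itertools import combinations

SEPARATORS = re.compile(r"[/.\-_]")    # slash, dot, hyphen, underscore

def candidate_patterns(urls):
    """Count candidate URL pairing patterns within one domain: every pair of URLs
    that differ by exactly one token yields a candidate pattern <a, b>."""
    patterns = Counter()
    for u, v in combinations(urls, 2):
        tu, tv = SEPARATORS.split(u), SEPARATORS.split(v)
        if len(tu) != len(tv):
            continue
        diff = [(a, b) for a, b in zip(tu, tv) if a != b]
        if len(diff) == 1:                       # single-token difference
            patterns[tuple(sorted(diff[0]))] += 1
    return patterns

urls = ["www.legco.gov.hk/yr99-00/en/fc/esc/e0.htm",
        "www.legco.gov.hk/yr99-00/zh/fc/esc/e0.htm"]
print(candidate_patterns(urls))        # Counter({('en', 'zh'): 1})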
A common practice is to do it straightforwardly with a global credibility threshold, e.g., C(p) > 500 as for the current work.Finally, the bilingual credibility of a website is defined asC(w) = max p C(p, w).It will be used to measure the bilingual degree of a website in a later phase of our work, for which an assumption is that bilingual websites tend to link with other bilingual websites.Some websites contain webpages that cannot be crawled by search engines. These webpages do not "exist" until they are created dynamically as the result of a specific search, mostly triggered by JavaScript or Flash actions. This kind of webpages as a whole is called deep web. Specifically, we are interested in the case where webpages in one language are visible but their counterparts in the other language are hidden. A very chance that we may have to unearth these deep hidden webpages is that their URLs follow some common naming conventions for convenience of pairing with their visible counterparts.Thus for each of those URLs still missing a paired URL after the URL matching using our bilingual URL pattern collection, a candidate URL will be automatically generated with each applicable pattern in the collection for a trial to access its possibly hidden counterpart. If found, then mark them as a candidate pair. For example, the pattern <english, tc chi> is found applicable to the first URL in Table 1 and accordingly generates the second as a candidate link to its English counterpart, which turns out to be a valid page.Starting with a seed bilingual website list of size N , bilingual URL pairing patterns are first mined, and then used to reach out for other bilingual websites. The assumption for this phase of work is that bilingual websites are more likely to be referenced by other bilingual websites. Accordingly, a weighted version of PageRank is formulated for prediction. Firstly, outgoing links and PageRank are used as baselines. Linkout(w) is the total number of outgoing links from website w, and the PageRank of w is defined as (Brin and Page, 1998) :PageRank(w) = r N +(1−r) ∑ w∈M (w) PageRank(w) Linkout(w) ,where M (w) is the set of websites that link to w in the seed set of N bilingual websites, and r ∈ [0, 1] a damping factor empirically set to 0.15. Initially, the PageRank value of w is 1. In order to reduce time and space cost, both Linkout(w) and PageRank(w) are computed only in terms of the relationship of bilingual websites in the seed set. The WeightedPageRank(w) is defined as the PageRank(w) weighted by w's credibility C(w). To reach out for a related website s outside the initial seed set of websites, our approach first finds the set R(s) of seed websites that have outgoing links to s, and then computes the sum of these three values over each outgoing link, namely, ∑ w Linkout(w), ∑ w PageRank(w), and ∑ w WeightedPageRank(w) for each w ∈ R(s), for the purpose of measuring how "likely" s is bilingual. An illustration of link relationship of this kind is presented in Figure 1 .In practice, the exploration of related websites can be combined with bilingual URL pattern detection to literately harvest both bilingual websites and URL patterns, e.g., through the following procedure:1. Starting from a seed set of websites as the current set, detect bilingual URL patterns and then use them to identify their bilingual webpages. 2. 
Select the top K linked websites from the seed set according to either ∑ Linkout, ∑ PageRank, or ∑ WeightedPageRank. 3. Add the top K selected websites to the current set, and repeat the above steps for the desired number of iterations.
Table 1: (1) http://www.fehd.gov.hk/tc chi/LLB web/cagenda 20070904.htm (2) http://www.fehd.gov.hk/english/LLB web/cagenda 20070904.htm
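The deep-web probing step described above can be sketched as follows (illustrative only: plain substring replacement stands in for the token-level pattern matching of the system, and the HTTP check of whether a candidate page actually exists is omitted):

def generate_counterpart_urls(url, patterns):
    """For a URL still missing a pair, generate candidate counterpart URLs by
    applying each pairing pattern <a, b> in both directions; whether a candidate
    actually exists is then checked with an HTTP request (omitted here)."""
    candidates = []
    for a, b in patterns:
        if a in url:
            candidates.append(url.replace(a, b, 1))
        if b in url:
            candidates.append(url.replace(b, a, 1))
    return candidates

print(generate_counterpart_urls("www.legco.gov.hk/yr99-00/zh/fc/esc/e0.htm", [("en", "zh")]))
# ['www.legco.gov.hk/yr99-00/en/fc/esc/e0.htm']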
2
This study presents an off-line corpus analysis to determine when or where humans recognize a DR as they process words incrementally. To this end, we want a human subject to identify the cues within the component clauses/sentences that trigger the recognition of a given DR, such as the underlined tokens in Example (1).Although the exact annotated resource is not yet available, we obtained such annotation by converting the annotation in the RST Signaling Corpus (Das et al., 2015) .Data The RST Signaling Corpus consists of annotation of discourse signals over the RST Discourse Treebank (Carlson et al., 2002) , which is a discourse annotated resource following the Rhetorical Structure Theory (RTS) (Mann and Thompson, 1988) . In the RST Discourse Treebank, a DR is annotated between two consecutive discourse units. In turn, in the RST Signaling Corpus, each DR is further labeled with one or more types of signaling strategy. These signals not only include explicit discourse markers but also other features typically used in automatic implicit relation identification and psycholinguistic research, such as reference, lexical, semantic, syntactic, graphical and genre features (Das and Taboada, 2017) . For example, the temporal relation in Example (A) is annotated with three signal labels in the RST Signaling Corpus: 1(1) discourse marker (now)(2) tense (past -present, future )(3) lexical chain (first year -next year)Only 7% of the relations are annotated as 'implicit'. Therefore, most conventionally 'implicit' relations are also annotated with explicit signals and included in the present analysis.Locating signal positions Based on these labels, we use heuristic rules (see appendix) and gold syntactic annotation 2 to identify the actual cue words in the text. For example, based on the above 3 signal labels, we identify the underlined tokens in Example (1). Manual check on 200 random samples shows that all signal tokens are perfectly tagged in 95% of the samples, and the remaining 5% samples are partially correct.We focus on relations that are signaled by surface tokens in order to examine word-level incrementality in discourse processing. Thus, we do not consider signals that are not associated with particular words, e.g. genre, and relations with annotations that are not specific enough. 4, 146 relations are screened 3 and 15, 977 relations are included in the analysis. The distribution of the DRs under analysis is shown in Table 1 . 1 The list of DR signals and the relation between the RST Treebank and the RST Signaling Corpus can be found in the appendix. Details can be found in the related literature.2 provided by the Penn Treebank, which annotates on the same text as the RST Treebank (Marcus et al., 1993) 3 List of excluded signals are shown in the appendix. Relating signal positions to incremental processing We analyze the positions of the cue tokens in relation to the DRs they signal. Each cue position is represented by its distance from the boundary of the relation's discourse units. The boundary is defined as the first word of the second clause/sentence in the relation, as each relation is annotated between two consecutive clauses/sentences in the RST formalism. 4 For example, the cue words eliminated and now in Example (1) have distances of −4 and 0, respectively. Although positions of the discourse cues can be identified from the recovered annotation, it is still unclear how informative the discourse cues are. 
It is possible that unambiguous cues only occur at the end even though numerous cues occur in the beginning. For example, in Example 1, can people correctly anticipate the temporal relation after reading the word now? Or is now too ambiguous that it is necessary to consider all signals after reading the last word? To answer these questions, we quantify and compare the discourse informativeness of prefixes in different sizes.The informativeness of each prefix is calculated from the cues covered by the prefix. For each DR spanning two consecutive clauses/sentences, the prefix size ranges from the first word of the first clause/sentence to the complete first and second clauses/sentences. Consecutive cue tokens are merged as one signal and a signal is counted as being covered by a prefix only if the last token of the signal string 5 is covered by the prefix. We use majority as a baseline approach to associate the discourse signals with the relation sense. The inferred relation sense r pn based on the majority cues in discourse prefix p n is defined as:EQUATIONwhere R is the set of all relation senses; S pn is the set of signal strings covered in discourse prefix p n ; n is the distance of the last word of p n ; and count(s, r) is the count of string s being identified as a signal for a DR of sense r in the corpus. The most frequent relation, elaboration, is assigned if no signals are found in the prefix. The relation senses inferred from prefixes of various sizes are compared with the actual relation sense. Although the majority approach does not model inter-relation and ambiguity of the signals, we assume that more signals, and thus longer prefixes, give better or the same prediction 6 . Therefore, we can compare the informativeness of the prefixes with that of the whole discourse span as upper bound.
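The majority-baseline inference over a prefix can be sketched as follows (the signal counts in the usage example are invented for illustration):

from collections import defaultdict

def infer_relation(prefix_signals, signal_counts, default="elaboration"):
    """Majority-baseline relation inference from a discourse prefix (sketch).
    prefix_signals is the set of signal strings S_{p_n} covered by the prefix;
    signal_counts[s][r] is count(s, r) estimated from the corpus. Returns the
    argmax relation sense, backing off to the most frequent relation."""
    totals = defaultdict(float)
    for s in prefix_signals:
        for r, c in signal_counts.get(s, {}).items():
            totals[r] += c
    return max(totals, key=totals.get) if totals else default

# Illustrative usage (the counts are invented):
signal_counts = {"now": {"temporal": 40, "background": 10}}
print(infer_relation({"now"}, signal_counts))   # 'temporal'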
2
Given a question based on an article, usually a small portion of article is needed to answer the concerned question. Hence it is not fruitful to give the entire article as input to the neural network. To select the most relevant paragraph in the article, we take both the question and the options into consideration instead of taking just the question into account for the same. The rationale behind this approach is to get the most relevant paragraphs in cases where the question is very general in nature. For example, consider that the article is about the topic carbon and the question is "Which of the following statements is true about carbon?". In such a scenario, it is not possible to choose the most relevant paragraph by just looking at the question. We select the most relevant paragraph by word2vec based query expansion (Kuzi et al., 2016) followed by tf-idf score (Foundation, 2011).We use word embeddings (Mikolov et al., 2013) to encode the words present in question, option and the most relevant paragraph. As a result, each word is assigned a fixed d-dimensional representation. The proposed model architecture is shown in Figure 1 . Let q, o i denote the word embeddings of words present in the question and the i th option respectively. Thus, q ∈ R d×lq and o i ∈ R d×lo where l q and l o represent the number of words in the question and option respectively. The question-option tuple (q, o i ) is embedded using Convolutional Neural Network (CNN) with a convolution layer followed average pooling. The convolution layer has three types of filters of sizes f j ×d ∀j = 1, 2, 3 with size of output channel of k. Each filter type j produces a feature map of shape (l q + l o − f j + 1) × k which is average pooled to generate a k-dimensional vector. The three kdimensional vectors are concatenated to form 3kdimensional vector. Note that Kim (2014) used max pooling but we use average pooling to ensure different embedding for different question-option tuples. Hence,h i = CN N ([q; o i ]) ∀i = 1, 2, .., n q (1)where n q is the number of options, h i is the output of CNN and [q; o i ] denotes the concatenation of q and o i i.e. [q; o i ] ∈ R d×(lq+l 0 ) . The sentences in the most relevant paragraph are embedded using the same CNN. Let s j denote the word embeddings of words present in the j th sentence i.e. s j ∈ R d×ls where l s is the number of words in the sentence. Then,d j = CN N (s j ) ∀j = 1, 2, .., n sents (2)where n sents is the number of sentences in the most relevant paragraph and d j is the output of CNN. The rationale behind using the same CNN for embedding question-option tuple and sentences in the most relevant paragraph is to ensure similar embeddings for similar questionoption tuple and sentences. Next, we use h i to attend on the sentence embeddings. Formally,a ij = h i • d j ||h i ||.||d j ||(3)EQUATIONEQUATIONwhere ||.|| signifies the l 2 norm, exp(x) = e x and h i • d j is the dot product between the two vectors. Since a ij is the cosine similarity between h i and d j , the attention weights r ij give more weighting to those sentences which are more relevant to the question. The attended vector m i can be thought of as the evidence in favor of the i th option. 
Hence, to give a score to the i-th option, we take the cosine similarity between h_i and m_i, i.e. score_i = (h_i · m_i) / (||h_i|| ||m_i||). Finally, the scores are normalized using softmax to get the final probability distribution, p_i = exp(score_i) / ∑_j exp(score_j), where p_i denotes the probability of the i-th option. We refer to options like none of the above, two of the above, all of the above, both (a) and (b) as forbidden options. During training, the questions having a forbidden option as the correct option were not considered. Furthermore, if a question had a forbidden option, that particular question-option tuple was not taken into consideration. Let S = [score_i ∀i | i-th option not in forbidden options] and |S| = k. During prediction, the questions having one of the forbidden options as an option are dealt with as follows: 1. Questions with a none of the above / all of the above option: if max(S) − min(S) < threshold, then the final option is the concerned forbidden option; else, the final option is argmax(p_i). 2. Questions with a two of the above option: if S_(k) − S_(k−1) < threshold, where S_(n) denotes the n-th order statistic, then the final option is the concerned forbidden option; else, the final option is argmax(p_i). 3. Questions with a both (a) and (b) type option: for this type of question, let the corresponding scores for the two options be score_i1 and score_i2. If |score_i1 − score_i2| < threshold, then the final option is the concerned forbidden option; else, the final option is argmax(p_i). We tried two different CNN models, one having f_j's equal to 3, 4, 5 and the other having f_j's equal to 2, 3, 4. We refer to the two models as CNN_{3,4,5} and CNN_{2,3,4} respectively. The values of the hyperparameters used are d = 300 and k = 100; the other hyperparameters vary from dataset to dataset. Since the number of options varies from question to question, our model generates the probability distribution over the set of available options. Similarly, the number of sentences in the most relevant paragraph can vary from question to question, so we set a_ij = −∞ whenever d_j was a zero vector. The cross-entropy loss function was minimized during training.
Figure 1: Architecture of our proposed model. The attention layer attends on sentence embeddings d_j using the question-option tuple embeddings h_i. The score calculation layer computes the cosine similarity between m_i and h_i, which is passed through softmax to get the final probability distribution.
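The prediction-time handling of forbidden options can be sketched as follows (a simplified sketch: the string matching is illustrative, and the "both (a) and (b)" case, which needs the indices of the two referenced options, is omitted):

import numpy as np

def predict_option(scores, option_texts, threshold):
    """Apply the forbidden-option rules given the per-option cosine scores."""
    forbidden = {"none of the above", "all of the above", "two of the above"}
    regular = [i for i, t in enumerate(option_texts) if t.lower() not in forbidden]
    s = np.sort(np.array([scores[i] for i in regular]))   # order statistics of S
    for i, t in enumerate(option_texts):
        t = t.lower()
        if t in {"none of the above", "all of the above"} and s.max() - s.min() < threshold:
            return i                                      # no regular option clearly dominates
        if t == "two of the above" and len(s) >= 2 and s[-1] - s[-2] < threshold:
            return i                                      # top two regular scores nearly tied
    return max(regular, key=lambda i: scores[i])          # otherwise the highest-scoring regular option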
2
The methodology for automatic term extraction as implemented by the Saffron system consists of the following steps 1. Part-of-speech tagging is applied to the text corpus. From this it can be seen the key languagedependent elements are: part-of-speech tagging, term normalization and the inclusion of a background corpus for some of the metrics. We will explain how we adapted this procedure to Irish.Irish morphology is noticeably more complex than that of English and this presents a challenge for processing the language that should generally require more resources. For automatic term recognition it is not in general necessary to consider verbs as they do not generally occur in terms, which in the context of Irish is beneficial as verbal morphology is more complex than nominal morphology. On the other hand, verbal morphology is generally regular in Irish, whereas nominal morphology is mostly irregular with plural and genitive forms not generally being predictable from the lemma. As such, the only high accuracy approach to handling Irish nominal morphology is a dictionary approach and for this we used the Pota Focal dictionary (Měchura, 2018) , as it provides an easy to parse XML version of the morphology for the basic vocabulary of the language. In total there are 4,245 lemmas (of which 3,488 are nouns) in Pota Focal, which we used in this work. 7n-ollscoile (2) n-ollscoileanna (41) t-ollscoil (0)* t-ollscoile (0)* t-ollscoileanna (0)* However, a particular challenge with Irish (along with other Celtic languages) is initial mutation, that is the changing of initial consonant by lenition, eclipsis or prefixing of a consonant to a word starting with a vowel. We used hard-coded rules to generate the forms of each word with initial mutation as they were not included in Pota Focal directly, but could be easily and systematically derived. We over-generate forms including applying a t-prefix to feminine nouns such as 'ollscoil', on the principle that it is unlikely that we will generate any errors from recognizing too many forms of the noun. An example of all the forms is given in Table 1 and we give the frequency of each form in the New Corpus for Ireland (Kilgarriff et al., 2006) , showing that all forms do occur in text, even those that may be considered ungrammatical. The morphology engine is then implemented by a simple lookup. The most important step for the creation of the tool is the identification of terms from the text and this is achieved in English by means of a regular expression over the output of a part-of-speech tagger. For adapting this to Irish, there is the obvious challenge that there is much less available training data for a part-of-speech tagger and secondly that the part-of-speech tagset would naturally differ from that of English, as for example there is no tag for genitive noun in English. To our knowledge there are two part-of-speech corpora available for Irish of sufficient size to apply machine learning techniques. The first one is from Uí Dhonnchadha and van Genabith (2006) and this corpus consists of the annotation of a number of documents, while a more recent corpus is due to Lynn et al. (2015) and this was created on Twitter by annotating a number of tweets. The basic statistics of the two corpora are given in Table 2 , and we can see that both corpora are similar in size (number of words) but there are differences in the number of documents due to the nature of the annotation as in the case of Lynn's corpus each tweet is considered a single document. 
Uí Dhonnchadha's corpus has more detailed part-of-speech types, however for the purpose of this work we consider only the top category part-of-speechs (e.g., 'noun', 'verb'). In order to adapt our ATR system to this task we further aligned the two corpora to use a single partof-speech tagging using the following categories: Noun, Verb, Adjective, Adverb, Preposition, Conjunction, Pronoun, Particle, Determiner and demonstrative 4 , Numeral and Other 5 . Further, we considered verbal nouns as verbs as we do not wish them to be extracted as terms, however we note that this could cause issues as there are many cases where there would be ambiguity between nouns and verbal nouns, for example 'aistriú' means 'translation' as a noun, but 'moving' or 'translating' as a verbal noun. We expect that the original corpora have made this distinction consistently so as to enable ATR, but this is certainly an aspect that deserves further investigation. As such we can use the following regular expression to identify terms in the text: N((N|A|D) * (N|A)+)?Note that this expression allows an article to occur in the middle of a term, which is quite common in Irish, for example in 'Banc na hÉireaan' (Bank of Ireland). In addition, we observe that it is common for terms in Irish to either start with an article, for example 'An Fhrainc' (France) or contain a preposition, such as 'aistriú focal ar fhocal' (translating word by word), however initial experiments suggested that including prepositions in the pattern lead to too many false positive terms.While the part-of-speech tagging approach described above has been successful in English and our results show that it is an effective method also for Irish, there are some clear shortcomings of the approach. In particular, the corpora we train on are quite small and as such there is a necessity to make trade-offs for part-of-speech tags that rarely occur within a term. As an alternative, we considered the use of a large database on known terms which exists in the form of the Tearma database. As such we attempted to train a model that could work at identifying terms in context. To achieve this we collected a large corpus of Irish from the Irish Wikipedia, which was selected due to its size and availability but also due to its technical nature meaning that it is likely to contain the terms used in a similar manner to the Tearma database. We used the dump from April 2019 and in total we extracted 10,074 articles totalling 4,093,665 words and we identified all terms from the Tearma database that occur in this corpus of which we found 24,038 terms. We trained our tagging model based on a simple IOB tagging (Ramshaw and Marcus, 1999) where a word was tagged as B if it was first word from a term, I if it occurred in a non-initial position in term and O and if it was not in a term in the Tearma database. This naturally leads to a large number of false negatives as many terms that are used in An Vicipéid are not in Tearma, more concerningly we also found a large number of false positives as there were terms in the database that were similar to other common words. An example of this was 'IS', which is an abbreviation for 'Intleacht Shaorga' (Artificial Intelligence), but also matched a very common form of the copula. 
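The candidate extraction step can be sketched as a regular-expression match over the coarse tag sequence (the single-letter tag codes are our own convention, not part of the original tagsets):

import re

TAG_CODE = {"Noun": "N", "Adjective": "A", "Determiner": "D"}   # everything else -> "O"
TERM_PATTERN = re.compile(r"N(?:[NAD]*[NA]+)?")                 # mirrors N((N|A|D)*(N|A)+)?

def extract_candidates(tokens, tags):
    """Return candidate term spans whose coarse POS sequence matches the pattern."""
    code = "".join(TAG_CODE.get(t, "O") for t in tags)
    return [" ".join(tokens[m.start():m.end()]) for m in TERM_PATTERN.finditer(code)]

# An article inside a term is allowed, e.g. 'Banc na hÉireann' (Noun Determiner Noun):
print(extract_candidates(["Banc", "na", "hÉireann"], ["Noun", "Determiner", "Noun"]))
# ['Banc na hÉireann']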
As such we also filtered the term database as follows:• If the term occurred more than 3,000 times (this value was hand-tuned) in the corpus it was rejected,• If the term occurred more than 100 times in the corpus it was accepted only if the first word was marked as a noun in Pota Focal,• If the term occurred less than 100 times it was accepted as a term.We also converted the corpora of Uí Dhonnchadha and Lynn to the IOB format so that we could compare the result.The goal of the previous task was to identify candidate terms from the text, and the next step is normally to provide a ranking of these terms so that those which are most relevant to the domain can be identified. A first step is then to provide some basic filters to remove some incorrect terms. In particular, we do the following:• Filter by the length of the term (up to a maximum of 4 words)• Remove all terms that consist solely of stopwords 6 .• Has a minimum number of occurrences in the corpus. However, given the size of the corpus we had, this number was set to 1, and so effectively this filter was ignoredWe then carried out the scoring of each term according to multiple metrics, this has been shown in previous work (Astrakhantsev, 2018) to be very effective and allows the method to be adjusted to the task. To this extent, we consider a corpus, C, and consider t ∈ C to a term extracted in the first step. Then, we develop a number of functions f i : T → R that produce a score for this.We can broadly group the ranking categories into four categories:These methods consider as primary evidence the frequency and distribution of the words, in particular focusing on words that are prevalent in only a few documents in the corpus. We define as usual a set of documents, D, and for each word a frequency across all documents denoted, tf (w). We can then define document frequency, df (w), as the number of documents, d ∈ D, where the word occurs at least once. We can then define the following basic metrics:Total TF-IDF is a well-established method for estimating the importance of a term based on how frequently occurs but penalizing terms that occur uniformly across the corpus.Total TF-IDF(w) = tf (w) log |D| df (w)Residual IDF (Church and Gale, 1995) compares the distribution of TF-IDF against an expectancy of it being randomly distributed.Residual IDF(w) = tf (w)× log 2 1 − exp tf (w) |D| − log 2 df (w) |D|These functions incorporate the distributional hypothesis (Harris, 1954) , by including information about how terms occur within other terms. For this we define T sub (w) as the set of terms which are contained in w, that is all sub-sequences of the words of w and T super (w) as all terms that contain w occurring in the corpus. We can then defined the following metrics:Combo Basic (Astrakhantsev, 2015) uses the count of both the super-and subterms as well as the length (in words) of the term, |w|:ComboBasic(w) = |w|tf (w)+ α|T super (w)| + β|T sub (w)|Similarly, cValue (Ananiadou, 1994) uses the subterm frequency as well:cValue(w) = log 2 (|w| + 0.1)× tf (w) − t ∈T sub (w) tf (t ) |T sub (w)|The domain coherence measures the correlation, using probabilistic mutual information, of the term with other words in the corpus and then uses this to predict a score, in particular we use the Pos-tRankDC method (Buitelaar et al., 2013) .Another important distinguishing factor about terms is that they are very frequent in their domain but not widely used outside that domain. 
We do measure this by taking a background corpus with term frequencies given as tf ref (w), let T = t f (w) be the total size in words in the foreground corpus and T r ef be the total total size of the background corpus. We can define Weirdness (Ahmad et al., 1999) as:Weirdness(w) = tf (w) tf ref (w)And a second metric Relevance (Peñas et al., 2001) as:Relevance(w) = 1− log 2 + tf (w)T ref df (w) tf ref wT |D|Finally, the use of topic models has been suggested based on the success of Latent Dirichlet Allocation (Blei et al., 2003) in the form of the Novel Topic Model (NTM) (Li et al., 2013 ), although we did not in fact use this metric, as our previous experiments have shown it to perform poorly. NTM requires a probability distribution of a word being labelled to one of K topics, p(w i = w|z i = k), the score is then calculated asNTM(w) = tf (w) v∈w max k P (w i = w|z i = k)Once all the scores for all candidate terms have been calculated, a ranking of the top terms is necessary. In general, these terms produce very different scores and as such, methodologies such as linear models (e.g., support vector machines) or simple classifiers (e.g., feed-forward neural networks) would not work well and would require significant training data. Instead, we have observed that the use of the unsupervised methods of mean reciprocal rank produces a very strong result without the need for training. For this we produce from each score a ranking function R i : T → N that produces the rank (from 1) of the score and then calculate the final score as:EQUATIONFor our experiments we used a combination of metrics that has proven to work well across many settings that consist of the five scores: ComboBasic, Weirdness, TF-IDF, cValue and Residual IDF. Then we apply a filtering step to select the top n candidates; for our experiments we set n = 100. In order to evaluate this approach we manually annotated a small section of the Wikipedia corpus. In total we annotated 11 documents consisting of 5,178 words and found among those 846 terms. This annotation was carried out by a single annotator and while this makes it difficult to estimate the quality of the annotation, this is unfortunately a typical issue with developing resources for underresourced languages. In Table 3 , we see the proportion of words marked with the IOB schema and see that the corpus of Lynn is most similar in terms of composition of the corpus. Moreover, we see that the distant supervision by Tearma while producing a similar ratio of terms, has far fewer words marked as I, suggesting that there are more oneword terms in this corpus than the part-of-speech tagging based corpora. An example of this annotation is given in Figure 1 .
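Two of the scoring functions and the rank-based combination can be sketched as follows (the exact combination formula is not reproduced above, so the sketch simply follows the "mean reciprocal rank" description; the smoothing term in Weirdness is our addition):

import math

def total_tfidf(tf, df, n_docs):
    """Total TF-IDF(w) = tf(w) * log(|D| / df(w))."""
    return {w: tf[w] * math.log(n_docs / df[w]) for w in tf}

def weirdness(tf, tf_ref, smoothing=1.0):
    """Weirdness(w) = tf(w) / tf_ref(w), smoothed for terms unseen in the background corpus."""
    return {w: tf[w] / (tf_ref.get(w, 0.0) + smoothing) for w in tf}

def combine_scores(score_dicts):
    """Each metric ranks the candidates (rank 1 = best); a term's final score is
    the mean of its reciprocal ranks, and candidates are sorted by that score."""
    terms = set.intersection(*(set(d) for d in score_dicts))
    final = {t: 0.0 for t in terms}
    for d in score_dicts:
        for rank, t in enumerate(sorted(terms, key=lambda x: d[x], reverse=True), start=1):
            final[t] += 1.0 / (rank * len(score_dicts))
    return sorted(terms, key=lambda t: final[t], reverse=True)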
2
Our approach relies on the generation of two views A and B of samples. To this end, augmentations are generated in embedding space for each sample x i in batch X. Batches are created from samples of setD = {(x i )} N i=1, where N denotes the number of sample (sentences). Augmentations are produced by an encoder f θ , parametrized by θ. The output of the encoder is the embeddings of samples in X denoted as H A ∈ T and H B ∈ T . Here T denotes the embedding space. Next, we let, h i ∈ T denote the associated representation of the sentence. The augmentation embeddings produced per sample are then denoted h A i and h B i . To obtain the different embedding, we leverage a transformer language model as an encoder in combination with varying dropout rates. Specifically, one augmentation is generated with high dropout and one with low dropout. This entails employing different random masks during the encoding phase. The random masks are associated with different ratios, r A and r B , with r A < r B . Integrating the distinct dropout rates into the encoder, we yieldh A i = f θ (x i , r A ) and h B i = f θ (x i , r B ). Given the embeddings, we leverage a joint loss, consisting of two objectives:EQUATIONInput sentenceℒ ! Correlation matrix Projector Encoder ℒ " ! " Projector Encoder Self-Contrastive Divergence Feature Decorrelation > > > > FIGURE 1. Schematic illustration of the proposed approach (best shown in color). Starting from an input sentence (left), two embeddings are produced by varying the dropout-rate in the encoder. Patches within the encoder indicate masking due to dropout. Different dropout rates and resulting embeddings color-coded: low dropout, high dropout.Self-contrastive loss is imposed on the embeddings (center). A projector maps embeddings to to a high-dimensional feature space, where the features are decorrelated (right).Here α ∈ R denotes a hyperparameter and p : T → P is a projector (MLP) parameterized by θ 2 , which maps the embedding to P, with |P| ≫ |T |.The objective of L S is to increase the contrast of the augmented embedding, pushing apart the embeddings h A i and h B i . The objective of L C is to reduce the redundancy and promote invariance w.r.t. augmentation in a high-dimensional space P. See Fig. 1 for a schematic illustration of the method.Self-contrast seeks to create a contrast between the embeddings arising from different dropouts. Hence, L S consists of the cosine similarity of the samples in the batch as:L S = 1 N N i h A i • (h B i ) T ∥h A i ∥∥h B i ∥ −1 (2)2.2 Feature Decorrelation:L C seeks to make the embeddings invariant to augmentation while at the same time reducing the redundancy in feature representation. To this end, the embedding h i is projected up from T to a high-dimensional space P, where decorrelation is performed. To avoid clutter in notation, we let p * i = p(h * i ) and * ∈ {A, B}, denote the augmented embedding vectors of sample x i after applying a projection with p(.). Then, a correlation matrix is computed from the projected embeddings. Its entries C j,k are:EQUATIONHere, p * i,j ∈ R denotes the j th component in the projected embedding vector. Then the loss objective for feature decorrelation is defined as:L C = − j (1 − C jj ) 2 + λ j j̸ =k C 2 jk (4)The first term seeks to achieve augmentation invariance by maximization of the cross-correlation along the diagonal. The second term seeks to reduce redundancy in feature representation by minimizing correlation beyond the diagonal. 
Given that these two objectives are opposing, λ ∈ R is a hyperparameter controlling the trade-off between them.
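The joint objective can be sketched as follows (a minimal PyTorch sketch; the batch standardization used to form the correlation matrix, and the sign of the first term of L_C, chosen so that the diagonal is pushed towards 1 as described above, are our assumptions):

import torch
import torch.nn.functional as F

def self_contrastive_loss(h_a, h_b):
    """L_S (Eq. 2): mean cosine similarity between the two dropout views, shifted
    by -1; minimizing it pushes the two embeddings apart."""
    return (F.cosine_similarity(h_a, h_b, dim=-1) - 1.0).mean()

def feature_decorrelation_loss(p_a, p_b, lam):
    """L_C over the projected views of shape (N, |P|); features are standardized
    over the batch before forming the correlation matrix."""
    n = p_a.size(0)
    a = (p_a - p_a.mean(0)) / (p_a.std(0) + 1e-6)
    b = (p_b - p_b.mean(0)) / (p_b.std(0) + 1e-6)
    c = a.T @ b / n                                              # C_{jk}
    on_diag = (1.0 - torch.diagonal(c)).pow(2).sum()             # push C_jj towards 1 (invariance)
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # decorrelate the rest
    return on_diag + lam * off_diag

def joint_loss(h_a, h_b, p_a, p_b, alpha, lam):
    """L = L_S + alpha * L_C (Eq. 1)."""
    return self_contrastive_loss(h_a, h_b) + alpha * feature_decorrelation_loss(p_a, p_b, lam)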
2
In this section, we describe the methodology based on which our system is designed, including the data preparation phase, modelling phase and model evaluation phase.In the shared task, two datasets (Priyadharshini et al., 2022) were provided where one comprises of Tamil sentences while the other comprising of code-mixed Tamil-English sentences. The Tamil dataset comprises of 2,240 sentences for training and 560 sentences for validation. In the code-mixed dataset there are 5,948 training sentences and 1,488 validation sentences. Table 1 shows the distribution of data among different classes before and after combining Tamil and Transliterated dataset.We first removed punctuations present in both the dataset. The datasets comprises of some categories like Transphobic there were only very few sentences corresponding to it. To overcome this data shortage issue we performed transliteration on the code-mixed dataset and we converted the sentences in that dataset also to its corresponding Tamil sentences (Hande et al., 2021) by using ai4bharat-transliteration 3 Python package. Before combining the dataset, we removed all those sentences which fell under the category of not-Tamil and then combined the Tamil dataset with the transliterated dataset ending up with 8,186 sentences which is approximately 4 times the size of the previous dataset. By this the imbalance in the dataset was reduced and we overcame the datashortage as well. Figure 1 depicts the data preparation phase graphically.Transliteration refers to the process of converting a word from one script to another wherein the semantic meaning of the sentence is not changed and the syntactical structure of the target language is strictly followed (Hande et al., 2021) . By this we have increased our data size considerably. For this Transliteration we have used ai4bharattransliteration Python package.In our experimentation, MURIL model outperformed all the other models which we experimented on. For evaluation we considered macro and weighted F1-score.For experimenting with ML models, we created a pipeline where first the text is vectorized by using CountVectorizer and is transformed by TfIdf-Transformer. Once the transformation of the data is completed, it is trained on the following Machine Learning models: LightGBM, Catboost, Ran-domForest, Support Vector Machines classifer and Multinomial Naive Naive Bayes. Of the all models Classes Tamil Dataset Transliterated dataset Combined dataset Counter-speech 149 348 497 Homophobia 35 172 207 Hope-Speech 86 213 299 Misandry 446 830 1276 Misogyny 125 211 336 None-of-the-above 1296 3715 5011 Transphobic 6 157 163 Xenophobia 95 297 392 Table 1 : Distribution of Dataset Figure 1 : Data Preparation phase experimented LightGBM (Ke et al., 2017) outperformed all the other algorithms by having 0.32 macro average f1-score and 0.65 weighted average f1-score followed by Catboost. Therfore we performed hyperparameter tuning on Optuna on Light-GBM where we ended up having 0.36 macro average f1-score and 0.63 weighted average f1-score which was the highest metric of our experiments on traditional ML models.MURIL (Khanuja et al., 2021) is a pretrained bert model created by Google for tasks on Indian languages trained on 17 Indian languages. It was parallely trained on Translated Data and Transliterated Data. Based on the XTREME (Hu et al., 2020) benchmark, MURIL outperformed mBERT for all the languages in all standard downstream tasks. Hence, this model handles translated and transliterated data very well. 
We fine-tuned the MURIL model with the parameters listed in Table 3. The metrics obtained with MURIL showed that it outperformed all of the other ML models.
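The classical-ML pipeline described above can be sketched as follows (the hyperparameters are illustrative, not the values tuned with Optuna; the MURIL fine-tuning itself is done separately with the parameters of Table 3):

from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.metrics import f1_score
from lightgbm import LGBMClassifier

# Vectorize, tf-idf transform, then train a LightGBM classifier.
pipeline = Pipeline([
    ("vect", CountVectorizer()),
    ("tfidf", TfidfTransformer()),
    ("clf", LGBMClassifier(n_estimators=300, class_weight="balanced")),
])

def train_and_evaluate(train_texts, train_labels, val_texts, val_labels):
    pipeline.fit(train_texts, train_labels)
    preds = pipeline.predict(val_texts)
    return (f1_score(val_labels, preds, average="macro"),
            f1_score(val_labels, preds, average="weighted"))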
2
Of the three tasks examined in this paper, we expect the most marked input effects for syntac- 7 Note that decl and interview represent the intercept for sentence and text type, meaning figures for other types represent deviations from these values. 8 An anonymous reviewer has asked about other genre/type correlations in our data: beyond imp+whow, the more distant second is wh questions in the interview subcorpus: although the coefficient for wh is not significantly collinear in the model, these two category combinations together are responsible for almost 50% of the chi squared residuals for sentence type versus genre (imp+whow: 41.1%, wh+interview: 8.2%). Since imp forms 32.8% of the whow data but only 11.3% of all data, there is some potential for conflation between results for imp in whow and whow as a whole, whereas for interviews, wh is only 6.8% of the data -a very significant proportional deviation from the average of 2.3%, but still modest in absolute terms. tic parsing. Parsing is not only well known to be affected by genre and domain (Lease & Charniak 2005 , Khan et al. 2013 , as well as sentence length (Ravi et al. 2008) , but it is also directly related to sentence type, since the unit of annotation is the sentence, and local problems in a parse can disrupt accuracy throughout each clause.Unlike POS tagging, dependency annotations in GUM represent manually corrected output from the Stanford Parser (see Chen & Manning 2014; V3.5 was used). While the entire corpus was corrected by student annotators, only 4,872 tokens were corrected a second time by an experienced instructor. Although this is a small dataset, we choose to use it rather than the whole corpus both because it is more reliable, and because this allows us to evaluate human errors in the initial correction. Our results for manual annotation therefore apply to the task of parser correction, and not to annotation from scratch.Here too, we consider text and sentence type, but also sentence length, as well as individual document effects. Our null hypothesis is an equal distribution of errors among all partitions. We suspect a stronger effect for sentence length, since long distance dependencies are likelier in long sentences and may be more difficult for humans and automatic parsing, by opening up more opportunities for actual and apparent ambiguities. Sentence type may also have a strong effect, especially for types underrepresented in parser training data (i.e. the Penn Treebank, Marcus et al. 1993 ). This is expected for imperatives and non-canonical clauses, whereas the decl and sub types are expected to perform best. Table 6 gives accuracy by genre and sentence type for dependency label and attachment. The types intj and ger have been dropped, since they were represented by fewer than 10 tokens in the doubly corrected data. Token counts in each partition are included for the remaining categories.As expected, humans improved on the parser in all cases. Genre is only significant for voyage, and only in parser label assignment. More pronounced negative effects can be seen for frag and other, which carry over from parser to manual correction. Smaller effects for the question types can be observed, but are based on few tokens.Although the results confirm the expected good performance on decl and lower importance of genre, imperatives emerge as unproblematic and only frag and other stand out. 
At the same time, it is possible there are alternative explana-tions for the data, such as sentence length or individual document difficulty. The four mixed-effects models summarized in Table 7 show that while sentence type survives, genre is no longer significant. Moreover, sentence length was disruptive only for humans (in contrast to Ravi et al.' Table 7 : t values from mixed effects models for parsing accuracy using sentence type, genre and length, with document random effects.The most striking sentence type predictor is wh, though it is based on little data. As length has been factored in, these are cases where length is not a sufficient predictor of the observed error rate. Upon closer inspection, wh sentences are shorter overall -about 10 tokens on average -while declaratives are 21 tokens on average but similarly difficult. Both types are dense in the syntactic content that can lead to errors while easy to catch categories, such as trivial modifiers, are more rare -see the dearth of easy modifier functions despite complex syntax in examples (3-5).(3) What analysis did you perform on the specimens and what equipment was used? (4) What are the startup costs involved?(5) Why run for president?The type frag was a strong predictor of error. Many instances of frag in the data were more complex than a simple NP, such as captions for image credit (6), dates (7), NPs with foreign word heads (8) or potentially ambiguous NPs (9), among many other short bits of language with little else available to contextualize them. 6 Coreference resolution
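The mixed-effects analysis summarized in Table 7 can be reproduced in outline as follows (a sketch only: the column names, the linear rather than logistic link, and the use of statsmodels instead of, e.g., lme4 are all our assumptions):

import pandas as pd
import statsmodels.formula.api as smf

def fit_accuracy_model(df: pd.DataFrame):
    """Per-token parsing accuracy (0/1) modeled with fixed effects for sentence
    type, genre and sentence length, and a random intercept per document."""
    model = smf.mixedlm("correct ~ C(sent_type) + C(genre) + length",
                        data=df, groups=df["doc_id"])
    result = model.fit()
    print(result.summary())   # coefficients and test statistics comparable to Table 7
    return result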
2
Coupling class-based LM (CLM) and curriculum learning, HCP is to gradually anneal class prediction to token prediction during LM training. In this section, we first describe how we instantiate word classes by leveraging hypernym relation from the Figure 2: Hypernym-paths of synsets "magnesium.n.01", "iron.n.01", and "desk.n.01", corresponding to the word magnesium, iron, and desk respectively.def token2class(token2freq, d, f):# token2freq is a dictionary whose key is the token and value is the tokens occurrences) # d is the depth, f is the occurrence threthold rtn = {} for token, freq in token2freq.items ():if freq > f: continue for synset in wordnet.synsets(token):for path in synset.hypernym_paths(): if len(path)>=d and noun in path[d−1]: rtn[token] = path[d−1] break if token in rtn: break return rtnCode 1: Pseudocode for token to class mapping.WordNet. We then present how to incorporate the proposed Hypernym Class Prediction task into LM training via curriculum learning.WordNet (Miller, 1995) is a lexical database that groups words into sets of cognitive synonyms known as synsets, which are in turn organized into a directed graph by various lexical relations including the hypernymy (is-a) relation. As shown in Figure 2 , each vertex is a synset, labeled by the text within the box, and each edge points from the hypernym (supertype) to the hyponym (subtype). Note that a word form (spelling) may be associated with multiple synsets -each corresponding to a different sense of the word, which are sorted by the frequency of the sense estimated from a senseannotated corpus. For example, iron has 6 synsets, among which "iron.n.01" is the most common one.Hence, if two words share the same hypernym at a certain level in their hypernym-paths (to the root in WordNet), we could say they are similar at that level. Here we use "Depth" to quantify the hypernym-path level. In Figure 2 , for example, at Depth 6, iron and magnesium are mapped to the same group named "metallic_element.n.01", while desk is mapped to "instrumentality.n.03". At Depth 2, all these three words share the same (indirect) hypernym "physical_entity.n.01".In this work, we map each token in our training set into its hypernym class if this token (1) has a noun synset in the WordNet, (2) with a hypernympath longer than a given depth d, and 3has frequency below a given threshold f in the training corpus. We only consider nouns because it is not only the most common class in the WordNet but also a difficult class for LMs to learn (Lazaridou et al., 2021) . For tokens with multiple synsets, we iterate over the synsets in the order of sense frequency and break the loop once found. We select the most frequent synset no less than the required depth. The mapping pseudocode is illustrated in Code 1, which is a data pre-processing algorithm conducted only once before the training and takes no more than 5 minutes in our implementation.We first partition the vocabulary into V x and V ¬x based on whether or not a token has a hypernym in the WordNet, and V h denotes the set of all hypernyms. The original task in a Transformer-based LM is then to predict the token w j 's probability with the output x from the last layer:P (y = w j |x) = exp(x T vw j ) w k ∈Vx∪V¬x exp(x T vw k ) (1)where w k is the k th word in the original vocabulary and v w k is its embedding. Here we assume the output layer weights are tied with the input em- beddings. We call any training step predicted with Eq. 
1 a token prediction step.To do the Hypernym Class Prediction step, we replace all tokens in V x in a batch of training data with their corresponding hypernym classes in V h . After the replacement, only hypernym classes in V h and tokens in V ¬x can be found in that batch. Then, the LM probability prediction becomes:P (y = w j |x) = exp(x T vw j ) w k ∈V h ∪V¬x exp(x T vw k ) (2)where w j could be either a token or a hypernym class. We called this batch step is a Hypernym Class Prediction (HCP) step.Note that Eq. 2 is different from the multiobjective learning target, where the hypernym class would be predicted separately:P (y = w j |x) = exp(x T vw j ) w k ∈V h exp(x T vw k ) (3)where w j is a hypernym class. We will elaborate on this difference in the experiment results part.We train a LM by switching from HCP to token prediction. For the example in Figure 2 , our target is to teach a model to distinguish whether the next token belongs to the metallic element class or instrumentality class during the earlier stage in training, and to predict the exact word from magnesium, iron, and desk later.Inspired by Bengio et al. (2009) , we choose curriculum learning to achieve this. Curriculum learning usually defines a score function and a pacing function, where the score function maps from a training example to a difficulty score, while the pacing function determines the amount of the easiest/hardest examples that will be added into each epoch. We use a simple scoring function which treats HCP as an easier task than token prediction. Therefore, there is no need to sort all training examples. The pacing function determines whether the current training step is a HCP step, i.e. whether tokens will be substituted with their hypernyms.Our pacing function can be defined as:EQUATIONorEQUATIONwhere P (y = c|t) is the probability that the current step t is a hypernym class prediction step. N is the total training steps. a and b are hyper-parameters. So, Eq. 4 is a constant pacing function in the first a * N steps, while Eq. 5 is a linear decay function. We plot these two functions in Figure 3 . According to our experimental results Tab. 5, these two functions are both effective in improving the language model.
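The curriculum can be sketched as follows (the exact forms of Eqs. 4-5 are not reproduced above, so the pacing functions below follow only the verbal description; token2class is the mapping produced by Code 1):

import random

def constant_pacing(t, total_steps, a):
    """Eq. 4 (verbal description): HCP with probability 1 during the first a*N
    steps, token prediction afterwards."""
    return 1.0 if t < a * total_steps else 0.0

def linear_pacing(t, total_steps, a, b):
    """Eq. 5 (verbal description): the HCP probability decays linearly to zero;
    this exact parametrization is an assumption."""
    return max(0.0, a - b * t / total_steps)

def maybe_substitute(batch_token_ids, token2class, p_hcp):
    """With probability p_hcp, turn the batch into an HCP step by replacing every
    token in V_x with its hypernym class id (tokens in V_-x stay unchanged, Eq. 2);
    otherwise leave the batch as a token prediction step (Eq. 1)."""
    if random.random() >= p_hcp:
        return batch_token_ids
    return [[token2class.get(tok, tok) for tok in seq] for seq in batch_token_ids]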
2
The experiment is conducted on native PDF documents. In line with the work presented in FinSBD-2 task by (Giguet and Lejeune, 2021) , we choose to implement an end-to-end pipeline from the PDF file itself to a fully structured document. This approach allows to control the entire process. Titles and Table of Contents that we generate for the shared tasks are derivative outputs of the system.The document content is extracted using the pdf2xml command (Déjean, 2007) . Three useful types of content are extracted from the document: text, vectorial shapes, and images.Pdf2xml introduces the concepts of token, line and block, as three computational text units. We choose to only rely on the "token" unit. In practice, most output tokens correspond to words or numbers but they can also correspond to a concatenation of several interpretable units or to a breakdown of an interpretable unit, depending on character spacing. We choose to redefine our own "line" unit in order to better control the coherence of our hierarchy of graphical units. We abandon the concept of "block" whose empirical foundations are too weak.Using pdf2xml allows to rely on vectorial information during document analysis. Text background, framed content, underline text, table grid are crucial information that contributes to sense making. They simplify the reader's task, and contribute in a positive way to automatic document analysis.Most vectorial shapes are basic closed path, mostly rectangles. Graphical lines or graphical points do not exist: lines as well as points are rectangles interpreted by the cognitive skills of the reader as lines or points. In order to use vectorial information in document analysis, we implemented a preprocessing stage that builds composite vectorial shapes and interprets them as background colors or borders. This preprocessing component returns shapes that are used by our system to detect framed content, table grids, and text background. It improves the detection of titles which are presented as framed text and it avoids considering table headers as titles.Pdf2xml extracts images from the pdf. They may be used in different context such as logos in the title page, figures in the document body. An other interesting feature lies in the fact that certain character symbols are serialized as images, in particular specific item bullets such as arrows or checkboxes. They are indistinguishable from a standard symbol character by the human eye.We choose to handle images as traditional symbol characters, so that they can be exploited by the structuration process, in particular by the list identification module. Identical images are grouped, and a virtual token containing a fake character glyph is created. The bounding box attributes are associated to the token and a fake font name is set. These virtual tokens are inserted at the right location by the line builder module thanks to the character x-y coordinates. This technique significantly improves the detection of list items and, as a consequence, the recognition of the global document structure.Page Layout Analysis (PLA) aims at recognizing and labeling content areas in a page, e.g., text regions, tables, figures, lists, headers, footers. It is the subject of abundant research and articles (Antonacopoulos et al., 2009) .While PLA is often achieved at page scope and aims at bounding content regions, we have taken a model-driven approach at document scope. 
We try to directly infer Page Layout Models from the whole document and we then try to instantiate them on pages.Our Page Layout Model (PLM) is hierarchical and contains 2 positions at top-level: the margin area and the main content area. The margin area contains two particular position, the header area located at the top, and the footer area located at the bottom. Aside areas may contain particular data such as vertically-oriented text. The main content area contains column areas containing text, figures or tables. Floating areas are defined to receive content external to column area, such as large figures, tables or framed texts.The positions that we try to fill at document scope are header, footer and main columns. First, pages are grouped depending on their size and orientation (i.e., portrait or landscape). Then header area and footer area are detected. Column areas are in the model but due to time constraints, the detection module is not fully implemented in this prototype yet.Header and footer area boundaries are computed from the repetition of similar tokens located at similar positions at the top and at the bottom of contiguous pages (Déjean and Meunier, 2006) . We take into account possible odd and even page layouts. The detection is done on the first twenty pages of the document. While this number is arbitrary, we consider it is enough to make reliable decisions in case of odd and even layouts.A special process detects page numbering and computes the shift between the PDF page numbering and the document page numbering. Page numbering is computed from the repetition of tokens containing decimals and located at similar positions at the top or at the bottom of contiguous pages. These tokens are taken into account when computing header and footer boundaries.The TOC is located in the first pages of the document. It can spread over a limited number of contiguous pages. One formal property is common to all TOCs: the page numbers are right-aligned and form an increasing sequence of integers.These characteristics are fully exploited in the core of our TOC identification process: we consider the pages of the first third of the document as a search space. Then, we select the first right-aligned sequence of lines ending by an integer and that may spread over contiguous pages.Linking Table of Content Entries to main content is one of the most important process when structuring a document (Déjean and Meunier, 2010) . Computing successfully such relations demonstrates the reliability of header detection and permits to set hyperlinks from toc entries to document headers.Once TOC is detected, each TOC Entry is linked to its corresponding page number in the document. This page number is converted to the PDF page number thanks to the page shift (see section 3.2). Then header is searched in the related PDF page. When found, the corresponding line is categorized as header. Unordered lists are also called bulleted lists since the list items are supposed to be marked with bullets. Unordered lists may spread over multiple pages.Unordered list items are searched at page scope. The typographical symbols (glyphs) used to introduce items are not predefined. We infer the symbol by identifying multiple left-aligned lines introduced by the same single-character token. In this way, the algorithm captures various bullet symbols such as squares, white bullets. . . Alphabetical or decimal characters are rejected as possible bullet style type. 
Images of character symbols are transparently handled thanks to virtual tokens created during the preprocessing stage.The aim of the algorithm is to identify PDF lines which corresponds to new bulleted list item (i.e., list item leading lines). The objective is not to bound list items which cover multiple lines. Indeed, the end of list items are computed while computing paragraph structures: a list item ends when the next list item starts (i.e., same bullet symbol, same indentation) or when less indented text objects starts.Ordered list items are searched at document scope. We first select numbered lines thanks to a set of regular expressions, and we analyse each numbering prefix as a tuple P, S, I, C where P refers to the numbering pattern (string), S refers to the numbering style type (single character), I refers to the numbering count written in numbering style type (single character), and C refers to the decimal value of the numbering count (integer).The To illustrate, the line "A.2.c) My Header" is analysed as A.2.L), L, c, 3 .Lines are grouped in clusters sharing the same numbering pattern. A disambiguation process as-signs an unambiguous style type to ambiguous lines. The underlying strategy is to complement unambiguous yet incomplete series in order to build coherent, ordered series.The aim of paragraph structure induction is to infer paragraph models that are later used to detect paragraph instances. The underlying idea to automatically infer the settings of paragraph styles.Paragraphs are complex objects: a canonical paragraph is made of a leading line, multiple body lines and a trailing line. The leading line can have positive or negative indentation. In context, paragraphs may be visually separated from other objects thanks to above spacing and below spacing.In order to build paragraph models, we first identify reliable paragraph bodies: sequences of three or more lines with same line spacing and compatible left and right coordinates. Then, leading lines and trailing lines are identified considering same line spacing, compatible left and/or right coordinates (to detect left and right alignments), same style. Paragraph lines are categorized as follows: L for leading line, B for body lines, T for trailing line. Header lines are categorized H. Other lines are categorized as ? for undefined.In order to fill paragraph models, paragraph settings are derived from the reliable paragraphs that are detected. When derived, leading lines of unordered and ordered list items are considered to create list item models.Once paragraph models and list item models are built, the models are used to detect less reliable paragraphs and list items (i.e., containing less than three body lines). Compatible models are applied and lines are categorized L, B (if exists) or T (if exists). Remaining undefined lines are categorized considering line-spacing.
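To illustrate the analysis of ordered-list numbering prefixes into the tuple ⟨P, S, I, C⟩, the following sketch parses a numbered line with a single regular expression. The style codes ('D' for decimals, 'U' for upper-case letters; Roman numerals omitted) and the exact regular expression are illustrative assumptions; only 'L' for lower-case letters is taken from the example "A.2.c) My Header" → ⟨A.2.L), L, c, 3⟩ given above.

```python
import re
import string

# Style codes are illustrative: 'D' decimal, 'L' lower-case letter, 'U' upper-case letter.
NUMBERING_RE = re.compile(
    r'^(?P<prefix>(?:[A-Za-z0-9]+[.)])*)(?P<count>[A-Za-z0-9]+)(?P<suffix>[.)])\s+')

def style_and_value(count):
    if count.isdigit():
        return 'D', int(count)
    if len(count) == 1 and count.islower():
        return 'L', string.ascii_lowercase.index(count) + 1
    if len(count) == 1 and count.isupper():
        return 'U', string.ascii_uppercase.index(count) + 1
    return None, None

def analyse_numbering(line):
    """Return the tuple (P, S, I, C) for a numbered line, or None otherwise."""
    m = NUMBERING_RE.match(line)
    if not m:
        return None
    style, value = style_and_value(m.group('count'))
    if style is None:
        return None
    pattern = m.group('prefix') + style + m.group('suffix')   # e.g. 'A.2.L)'
    return pattern, style, m.group('count'), value

# analyse_numbering("A.2.c) My Header") -> ('A.2.L)', 'L', 'c', 3)
```

Lines sharing the same pattern P can then be grouped into clusters and disambiguated so that each cluster forms a coherent, increasing series.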
2
At the beginning we tried to choose the best classifier for the Arabic Dialects Identification (ADI) task from a set of classifiers provided by WEKA (Hall et al., 2009) by measuring the performance of several classifiers on testing with the training dataset, 10-fold cross-validation, and by percentage split which divides the training set into 60% for training and 40% for testing. Table 2 reports results for a range of classifiers that we tried, using the WEKA StringToWordVector filter with WordTokenizer to extract words as features from utterance-strings. SMO was the best performing classifier. Table 3 shows the results of SMO using CharacterNGram Tokenizer with Max=3 and Min=1. Word Tokenizer method, also known as bag of words, is a filter that converts the utterances into a set of attributes that represents the occurrence of words (delimited by space, comma, etc) from the training set. It is designed to keep the n (which we set to 1000) top words per class. NGramWord Tokenizer is similar to Word Tokenizer with the exception that ability to also include word-sequences with the max and min number of words; while CharacterNGram Tokenizer counts 1-2-and/or 3-character n-grams in the utterance-string.The second column in table 2 shows the results on the same (dialect-labelled) data as those used to train classifier. Third column represents the results on 10-fold cross-validation. The fourth column shows the results on a randomly selected 40% of original training data for test of classifiers trained on the other 60%. After running the experiments in Table 2 , we realised that 10-fold cross-validation is very timeconsuming (at least 10 times the duration of evaluation on training set or 60:40 percentage split) but produces the same classifier ranking, so we did not repeat the 10-fold cross-validation for Table 3 : The accuracy of different classifiers (CharacterNGramTokenizer).Looking at table 2, we noticed that by using SMO we got 6803 utterances correctly classified and 816 utterances misclassified. To improve the identification results we output the misclassified utterances and converted the text from Buckwalter to normal readable Arabic script because looking at the Buckwalter texts is difficult even if you know the Buckwalter transliteration system (Buckwalter, 2002) . Then, we asked our Arabic linguistic experts to examine some of the texts which were misclassified, and try to find features which might correctly predict the dialect. Figure 2 shows example of misclassified utterances. The example shows the instance 4 is actually labelled class 2:GLF but the classifier made an error and predicted class 3:NOR. The Arabic linguistics experts analysed the shortcomings in the misclassified utterances from the training data. They found that numerous texts are too short to say anything about their dialect origins, for example: $Ark is a short one-word text which appears unchanged labelled as different dialects. Some of the utterance seem to be entirely MSA despite having dialect labels, possibly due to the Automatic Speech Recognition method used; and a lot of the utterance have at least some MSA in them. Some utterances that have recognisable dialect words often have words -which are shared between two or more dialects. They even found some utterances labelled as one dialect but evidently containing words not from that dialect; for example in utterance (254) labelled as LAV in the training set contains a non-LAV lexical item, see figure 3. 
This analysis led us to conclude that it is impossible in principle for WEKA to classify all instances correctly. There is a proportion of texts that cannot be classified, and this sets a ceiling on accuracy that it is possible to achieve approximate to 90-91%.SMO is the WEKA implementation of the Support Vector Machines classifier (SVM) which have been developed for numeric prediction and classifying data by constructing N-dimensional hyper plane to separate data optimally into two categories (Ayodele, 2010). SMV works to find a hypothesis h that reduces the limit between the true error in h will make it on unseen test data and the error on the training data (Joachims, 1998) . SMV achieved best performance in text classification task due to the ability of SVM to remove the need for feature selection which means SVM eliminate a high-dimensional feature spaces resulting from the frequent of occurrence of word wi in text. In addition, SVM automatically find good parameter settings (ibid).Term Frequency represent the frequency of particular word in text (Gebre et al., 2013). Based in our task we found some words usually frequent in one dialect more than other dialects. So we used the weight of TF to indicate the importance of a word in text.Invers Document Frequency tried to scale the weight of frequent words if it appear in different texts (more than one dialects) that is mean a word which appear in many dialect we cannot used as feature (Gebre et al., 2013) .The first experiments to choose the best classifier to identify Arabic dialects showed that SMO is the best machine learning classifier algorithm, but we may increase accuracy by adjusting parameters and features taken into account.The WordTokenizer setting assumes features are words or character-strings between spaces while the CharacterNGramTokenizer assumes features are 1/2/3-character sequences. We used the WEKA StringToWordVector filter with WordTokeniser which splits the text into words between delimiters: (fullstop, comma, semi-colon, colon, parenthesis, question, quotation and exclamation mark). After that, we decided to use SMO, but we suggested trying character n-Grams as units, instead of words as units. We used CharacterNGramTokenizer to splits a string into an n-gram with min and max gram. We tried to set Max and Min both to 1 gives a model based on single characters; max and min both to 2 is a char-bigram model; max and min both to 3 gives us a trigram model; max and min to 4 gives a 4-gram model, table 4 shows the results of different gram values when evaluating with the training set and a 60:40 percentage split of the training set. In addition, to improve performance we tried to replace the dimensions of the feature vector with their IDF and TF weight which is a standard method from Information Retrieval (Robertson, 2004) . We supposed the models were very similar: (3-1) has all the trigrams of (3-3) and also some bigrams and unigrams but these probably are common to all or most dialects and so do not help in discrimination. However, the Task rules stated that we were restricted to trying our three best classifiers, so at this stage we had to choose three "Best" results. Sometimes the training set score is high, but the 60:40 percentage split score is low; and sometimes the 60:40 percentage split score is high but the Training set score is poor. 
So, we decided to use the 60:40 percentage split as our guide for choosing the best combination, because using the training set for both training and evaluation may over-fit to the training set. Figure 4 below charts the four tables above, summarising the different combinations of TF/IDF and WC values with the SMO classifier.
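For readers working outside WEKA, the character n-gram setup can be approximated with an equivalent scikit-learn pipeline. This is a hedged re-implementation sketch, not the WEKA configuration itself: a TF-IDF weighted character n-gram vectorizer (min=1, max=3) feeds a linear SVM that stands in for SMO, evaluated with the same 60:40 split; the C value and other parameters are illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

def build_dialect_classifier(ngram_range=(1, 3), use_tfidf=True):
    # Character n-grams between min=1 and max=3, mirroring WEKA's
    # CharacterNGramTokenizer; TF/IDF weighting replaces raw word counts.
    Vectorizer = TfidfVectorizer if use_tfidf else CountVectorizer
    return make_pipeline(
        Vectorizer(analyzer="char", ngram_range=ngram_range, lowercase=False),
        LinearSVC(C=1.0),  # linear SVM standing in for WEKA's SMO
    )

def evaluate_60_40(texts, labels):
    # 60:40 percentage split, as used above to rank classifier settings.
    X_tr, X_te, y_tr, y_te = train_test_split(
        texts, labels, test_size=0.4, random_state=0, stratify=labels)
    clf = build_dialect_classifier()
    clf.fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))
```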
2
Given that the transfer learning process described in this study uses the Recursive Neural Tensor Network (RNTN) model proposed by Socher et al. (2013b) as the source model, we make numerous references to the aforementioned model throughout the paper. Therefore, to avoid clutter, from this point onward the model proposed by Socher et al. 2013bis referred as Socher Model in the remainder of this paper.Depending on the size of the corpus (phrases extracted from legal text), availability of human annotators and the time, it is not feasible to analyze and modify the sentiment of every word in a corpus. Therefore, it is required to select the vocabulary (unique words in the corpus) such that the end-model can correctly classify the sentiment of most of the phrases from the legal domain while not squandering human annotator time on words that occur rarely. To this end, first, the stopwords (Lo et al., 2005) are removed from the text by utilizing the classical stop-word list known as the Van stop-list (Van Rijsbergen, 1979) . Next, the term frequencies for each word in the corpus is calculated and only the top 95% words of it are added to the vocabulary.The selected vocabulary (set of individual words) is given to the sentiment annotator Socher Model as input. From the model, sentiment is classified into one of the five classes as in table 3.2. This class scheme made sense for the movie re-views for which the Socher Model is trained and used for. However, in the application of this study, the basic requirement of finding sentiment in court cases in the legal domain is to identify whether a given statement is against the plaintiff's claim or not. Therefore, we define two classes for sentiment: negative and non-negative. Three human judges analyze the selected vocabulary and classify each unique word into the two classes depending on its sentiment separately and independently. If at least two judges agree, the given word's sentiment is assigned as the class those two judges agreed. For the same word, the output from the sentiment annotator Socher Model belongs to one of the five classes mentioned in the preceding subsection. In this approach, we map the output from Socher Model to the two classes we define in For a given word, if the two sentiment values assigned by the Socher Model and human judges do not agree with the above mapping, we define that the Socher Model's output has deviated from its actual sentiment. For example:Sentence: Sam is charged with a crime. Socher Model's output: positive Human judges' annotation: negativeThe word charged has several meanings depending on the context. As the Socher Model was trained using movie reviews, the sentiment of the word charged is identified as positive. Although the sentiment of the term crime is recognized as negative, the sentiment of the whole sentence is output as positive. But in the legal domain, charged refers to a formal accusation. Therefore, the sentiment for the above sentence should have been negative. From the selected vocabulary, all the words with deviated sentiments are identified and listed separately for the further processing.In the preceding subsection, we came across a situation where the sentiment values from the Socher Model do not match the actual sentiment value because of the difference in domains. And there are words like insufficient, which were not recognized by the model because those terms were not included in the training data-set. 
One approach to solve this is to annotate the phrases extracted from legal case transcripts manually as the Socher Model suggests, which will require a considerable amount of human effort and time. Instead of that, we can change the model such that the desired output can be obtained using the same trained Socher Model without explicitly training using phrases in the legal domain. Hence, this method is called a transfer learning method.In order to change the model, first, it is required to understand the internals of the Socher Model model. When a phrase is provided as input, first it generates a binary tree corresponding to the input in which each leaf node represents a single word. Each leaf node is represented as a vector with d-dimensions. The parent nodes are also d-dimensional vectors which are computed in the bottom-up fashion according to some function g. The function g is composed of a neural tensor layer. Through the training process, the neural tensor layer and the word vectors are adjusted to support the relevant sentiment value. The neural tensor layer corresponds to identify the sentiment according to the structure of words representing the phrase. If we consider a phrase like not guilty ,both individual word elements have negative sentiments. But the composition of those words has the structure of negating a negative sentiment term or phrase. Hence the phrase has a non-negative sentiment. If the input was a phrase like very bad, the neural tensor layer has the ability to identify that the term very increases the negativity in the sentiment.The requirement of the system is to identify the sentiment of a given phrase. The proposed approach is not to modify the neural tensor layer completely. We simply substitute the word vector values of individual words which are having deviated sentiments between Socher Model and human annotation (See sections 3.2). The vectors for the words which were not in the vocabulary of the training set which was used to train the RNTN model should be instantiated. The vectors of the words which are not deviated (according to the definition provided in the preceding subsection 3.3) will remain the same. As the words with deviated sentiments (provided by the Socher Model) in the vocabulary are already known, we initialize the vectors corresponding to the sentiment annotation for those words. Since the model is not trained explicitly, the vector initialization is done by substituting the vectors of words in which sentiment is not deviated comparing the Socher Model output and its actual sentiment. After the substitution is completed, we consider the part-of-speech tag. For that purpose, the part-of-speech tagger mentioned in Toutanova et al. (2003) is used. The substitution of vectors is carried out as shown in Table 2 The number of words which have deviated sentiments is a considerably lower amount compared to the selected vocabulary. The rest of the words' vectors representing sentiments are not changed in the modification process. The neural tensor layer also remains unchanged from the trained Socher Model using movie reviews (Socher et al., 2013b) . When the vectors for words with deviated sentiments are initialized according to the part-ofspeech tag as shown in Table 2 , it is possible to make a fair assumption that when deciding the sentiment with the proposed implementation, it does not harm the structure corresponding to the linguistic features of English. Consider the sentence "evidence is insufficient." 
as an example.The term "insufficient" is not in the vocabulary of the Socher Model due to the limited vocabulary in training data set. Therefore, the Socher Model provides the sentiment of that word as neutral which indicates as a word with a deviated sentiment. Following the Table 2 , the sentiment related vector is instantiated by substituting the vector of wrong as the part-of-speech tag of insufficient is JJ (Santorini, 1990) . Therefore the modified version of the RNTN model has the capability of identifying the sentiment of the above sentence as negative. The figure 1 shows how the sentiment is induced through the newly instantiated word vector. And there are scenarios where the term is in the vocabulary of the Socher Model but has a different sentiment compared to the legal domain. Consider the sentence "Sam is charged with a crime" which was mentioned in section 3.2, In section 3.2, we have identified that the term charged denotes a different sentiment in legal domain compared to movie reviews. The source RNTN model outputs a positive sentiment for that given sentence as the term charged is identified as having a positive sentiment according to movie reviews domain. And that term is the cause for having such an output from the source model. The figure 2 indicates how the change we introduced in the target model (in section 3.2) induce the correct sentiment up to the root level of the phrase. Therefore, the target model identifies the sentiment correctly for the given phrase. To improve the recall in identifying phrases with negative sentiment, we have added another rule to the classification criteria. The source RNTN model (Socher Model) provides the score for each of the five classes such that all those five scores sum up to 1. If the negative sentiment class has the highest score, the sentiment label of the phrase will be negative. Otherwise, the phrase again can be classified as having a negative sentiment if the score for negative sentiment class is above 0.4. If those two conditions are not met, the phrase will be classified as having a non-negative sentiment. Section 4 provides observations and results regarding the improved criteria.
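The two mechanisms described above, substituting the vectors of deviated words and relaxing the decision rule with a 0.4 threshold, can be sketched as follows. The substitute words per POS tag and the index of the negative class are illustrative assumptions standing in for the actual mapping in Table 2; only the threshold value and the "highest score or above 0.4" rule come from the text.

```python
import numpy as np

# Donor words per POS tag and target sentiment, in the spirit of Table 2;
# the concrete word choices here are assumptions for illustration only.
SUBSTITUTE_BY_POS = {
    "negative":     {"JJ": "wrong", "NN": "crime",   "VBN": "accused"},
    "non-negative": {"JJ": "good",  "NN": "benefit", "VBN": "granted"},
}

def patch_deviated_vectors(word_vectors, deviated_words, pos_tags):
    """Overwrite the vector of each deviated word (per human annotation) with the
    vector of a non-deviated donor word of the same POS and the desired sentiment."""
    for word, target_sentiment in deviated_words.items():
        donor = SUBSTITUTE_BY_POS[target_sentiment].get(pos_tags.get(word))
        if donor is not None and donor in word_vectors:
            word_vectors[word] = np.copy(word_vectors[donor])
    return word_vectors

def classify_phrase(class_scores, negative_index=0, threshold=0.4):
    """class_scores: the five softmax scores from the RNTN root node (sum to 1).
    The phrase is negative if the negative class wins or scores above 0.4."""
    scores = np.asarray(class_scores)
    if scores.argmax() == negative_index or scores[negative_index] > threshold:
        return "negative"
    return "non-negative"
```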
2
The objective of this work is to identify the troll from multimodal memes. Initially, we exploit the visual aspects of the memes and develop several CNN architectures. Subsequently, the textual information is considered, and deep learning-based methods (i.e., LSTM, CNN, LSTM+CNN) are applied for classification. Finally, the visual and textual features are synergistically combined to make more robust meme classification inferences. Figure 2 depicts the abstract process of the troll meme classification system.In the preprocessing step, unwanted symbols and punctuations are removed from the text automatically using a Python script. The preprocessed text is transformed into a vector of unique numbers. The Keras tokenizer function is utilized to find the mapping of this word to the index. The padding technique is applied to get equal length vectors. Similar to ImageNet's preprocessing method (Deng et al., 2009) , all images are transformed into a size of (224 × 224 × 3) during preprocessing.Several pre-trained CNN architectures including VGG16 (Simonyan and Zisserman, 2014) , VGG19, and ResNet50 (He et al., 2016) are employed here. To accomplish the task, this work utilized the transfer learning approach (Tan et al., 2018) . At first, the top two layers of the models are frozen and then added a global average pooling layer followed by a sigmoid layer for the classification. The models are trained using the 'binary_crossentropy' loss function and 'adam' optimizer with a learning rate of 1e −3 . Training is performed by passing 32 samples at each iteration. Besides, we use the Keras callback method to save the best intermediate model.In order to extract features from the text modality, various deep learning architectures are used. The investigation employs CNN and RNN architectures, specifically CNN and LSTM with CNN (LSTM+CNN). Firstly, the Keras embedding layer generates the word embeddings for a maximum caption length of 1000. Subsequently, these em-beddings are propagated to the models. We construct a CNN model consisting of one convolution layer associated with a filter size of 32 and a ReLU (Rectified Linear Unit) activation function in one architecture. To further downsample the convoluted features, we use a max-pooling layer followed by a classification layer for the prediction. In another architecture, we added a single LSTM layer of 100 neurons at the top of the CNN network and thus created the LSTM + CNN model. Here, the LSTM layer is introduced due to its effectiveness in capturing the long-term dependencies from the long text.Visual features are extracted using the pre-trained VGG16 model. Following the VGG16 model, we added a global average pooling layer with fully connected and sigmoid layers. We employed CNN and LSTM models to extract the textual features. Finally, the output layers of the visual and textual models are concatenated to form a single integrated model. The output prediction is produced in all combinations by a final sigmoid layer inserted after the multimodal concatenation layer. All the models are compiled with the 'binary crossentropy' loss function. Aside from that, we utilize the 'adam' optimizer with a learning rate of 1e −3 and a batch size of 32. Table 2 shows the list of tuned hyperparameters used in the experiment.
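A minimal Keras sketch of one of the multimodal combinations described above (frozen VGG16 visual branch with global average pooling, a CNN+LSTM text branch over the embedding layer, concatenation, and a sigmoid output) is given below. Layer choices not stated in the text, such as the convolution kernel size and embedding dimension, are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_multimodal_model(vocab_size=20000, max_len=1000, embed_dim=100):
    # Visual branch: frozen VGG16 features + global average pooling.
    image_in = layers.Input(shape=(224, 224, 3), name="image")
    vgg = tf.keras.applications.VGG16(include_top=False, weights="imagenet")
    vgg.trainable = False
    visual = layers.GlobalAveragePooling2D()(vgg(image_in))

    # Textual branch: embedding -> Conv1D(32 filters) -> max pooling -> LSTM(100).
    text_in = layers.Input(shape=(max_len,), name="caption")
    x = layers.Embedding(vocab_size, embed_dim)(text_in)
    x = layers.Conv1D(32, kernel_size=3, activation="relu")(x)  # kernel size assumed
    x = layers.MaxPooling1D(pool_size=2)(x)
    textual = layers.LSTM(100)(x)

    # Multimodal fusion: concatenate both feature vectors, then a sigmoid output.
    fused = layers.Concatenate()([visual, textual])
    out = layers.Dense(1, activation="sigmoid")(fused)

    model = Model(inputs=[image_in, text_in], outputs=out)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model
```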
2
In this section we introduce the task of predicting code-switching points and describe the base model for it, with a self-explainable architecture as its backbone. We then describe how we incorporate speaker-grounding prompts into the model. Figure 1 : We use a Transformer-based model to predict language switches in dialogues and identify phrase-level features guiding predictions. Here, both speakers are bilingual, but Blue's native language is Spanish and Green's native language is English. They have unique social factors (such as age). The dialogue structure reflects speaker identities and relationships: Green will switch to Spanish with el actor, accommodating Blue's language preference. Using only dialogue context, the baseline (1) fails to pick up on this, while our speakeraware model (2) successfully predicts a code-switch and identifies useful linguistic cues.Let d i = [w 1 , w 2 , . . . , w u ]be an utterance (string of tokens) in the full dialogue D. Given a context window of size h, a model processes a local dialogue context:[d i−h , . . . , d i−1 , d ′ i ], where d ′ i := [w 1 , w 2 , . . . , w b ], b ∈ {1, 2, . . . , u}.In other words, we take the prefix of the current utterance d i up to an index b. Each word w j in the dialogue has a language tag l j associated with it. For the given dialogue context D up to boundary-word w b , a model must predict whether the language of the next word after w b will be code-switched (1), or the same (0). In our setup, a code-switch occurs between two consecutive words w b , w b+1 if the language of w b is English and the language of w b+1 is Spanish (or vice versa). In particular, a word with an ambiguous language, such as the proper noun Maria, cannot be a switch point; only words with unambiguous language tags are switched. This pre-ASH is first speaker, older, female, from Spanish speaking country, between English and Spanish prefers both, rarely switches languages. JAC is second speaker, older, male, from Spanish speaking country, between English and Spanish prefers both, never switches languages.ASH is a middle-aged woman from a Spanish speaking country. Between English and Spanish she prefers both, and she rarely switches languages. ASH speaks first. JAC is a middle-aged man from a Spanish speaking country. Between English and Spanish he prefers both, and he never switches languages. JAC speaks second.Partner ASH, JAC are all middle-aged from a Spanish speaking country. Between English and Spanish they prefer both. ASH is a woman and rarely switches languages. JAC is a man and never switches languages. ASH speaks first. vents us from labeling monolingual utterances as code-switched only because they have an ambiguous term such as a proper noun.Speaker-Aware Grounding Each utterance in the dialogue context has a speaker associated with it. Let the set of all speakers in the dialogue context be S = {s 1 , s 2 , s 3 , . . . , s M }. We define a speaker-aware prompt P = {p 1 , p 2 , p 3 , . . . , p K } as a concatenation of K strings p i , each describing an attribute of a speaker in the dialogue. Together, P describes the unique attributes of all M speakers in the dialogue context. Our proposed speaker-guided models take as input P• D = [p 1 , . . . , p K , d i−w , . . . , d ′ i ], the concatenation of prompts and dialogue context. 
We encode the inputs with a multilingual Transformerbased architecture (Devlin et al., 2019; Conneau et al., 2020) before using a linear layer to predict the presence or absence of a code-switch.We incorporate global information about each speaker in a dialogue using different prompt styles, generating a prompt P for a given dialogue context D. In theory, these prompts have the potential to change the model's priors by contextualizing dialogue with speaker information and should be more useful for predicting upcoming language switches. We consider two aspects when designing prompts.The prompt describes all speakers S in the dialogue using a set of speaker attributes A = {a 1 , a 2 , . . . , a T }. To create a description P m for speaker s m ∈ S, we combine phrases p sm 1 , p sm 2 , . . . , p sm T , such that each phrase corresponds to exactly one attribute. As Table 1 indicates, we use speaker IDs to tie a speaker to her description, and all prompts cover the full set of attributes, A, for all speakers in D.Form We consider three prompt forms: List, Sentence, and Partner. The prompt form determines both the resulting structure of prompt string P and the way we combine local attribute phrases p j to generate a speaker description P i . Table 1 provides concrete examples of List, Sentence, and Partner prompts for a pair of speakers.List and Sentence prompts do not explicitly relate speakers to each other: the final prompt P = {P 1 , . . . , P m , . . . , P M } concatenates individual speaker prompts P i . List forms combine all attributes in a speaker description P m with commas, while Sentence forms are more prose-like. These prompt forms are most straightforward to implement and simply concatenate each speaker profile without considering interactions of features. The model must implicitly learn how attributes between different speakers relate to one another in a way that influences code-switching behavior.Speaker entrainment or accommodation influences code-switching behavior (Bawa et al., 2020; Myslín and Levy, 2015; Parekh et al., 2020) . Thus, we also created Partner prompts to explicitly highlight relationships between speakers. We hypothesize that these are more useful than the List and Sentence forms, from which the model must implicitly learn speaker relationships. Partner prompts include an initial P i containing attribute qualities that all speakers share:P i := p a j |a j = v k , ∀s ∈ S ,where a j ∈ A and v k is a value taken on by attribute a j . As an example, all speakers may prefer Spanish, so P i will contain an attribute string p i capturing this. The final partner prompt is P partner = {P i , P 1 , P 2 , . . . , P M }, where speaker-specific descriptions P 1 , P 2 , . . . , P M highlight unique values of each speaker.We prepend prompts P to dialogue context D using [EOS] tokens for separation. We do not vary the feature order in a given prompt, but additional prompt tuning may reveal an optimal presentation of features in these prompts.Our proposed setup takes as input the dialogue context and a prepended speaker prompt. To explain predictions of the baseline and our speaker-aware setups, we use SelfExplain (Rajagopal et al., 2021) , a framework for interpreting text-based deep learning classifiers using phrases from the input. Self-Explain incorporates a Locally Interpretable Layer (LIL) and a Globally Interpretable Layer (GIL). 
GIL retrieves the top-k relevant phrases in the training set for the given instance, while LIL ranks local phrases within the input according to their influence on the final prediction. LIL quantifies the effects that subtracting a local phrase representation from the full sentence have on the resulting prediction. We exclusively use LIL to highlight phrases in the speaker prompts and dialogues to identify both social factors and linguistic context influential to models; through post-hoc analysis, we can reveal whether these features can be corroborated with prior literature or indicate a model's reliance on spurious confounds. We do not use the GIL layer because we do not have instance-level speaker metadata; instead, speaker features are on the dialogue-level and will not yield useful top-k results. Figure 4 illustrates our full proposed model with two classification heads: one for prediction and one for interpretation. §7.1 describes how we score phrases according to their influence on the final prediction.
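To make the prompt construction concrete, the sketch below builds List and Partner prompts from per-speaker attribute dictionaries and prepends them to the dialogue context with [EOS] separators. The attribute names and the exact phrasing of the templates are illustrative assumptions; only the overall List/Partner structure and the [EOS] separation follow the description above.

```python
from typing import Dict, List

Speaker = Dict[str, str]  # e.g. {"id": "ASH", "age": "older", "gender": "female"}

def list_prompt(speakers: List[Speaker]) -> str:
    # List form: one comma-joined attribute description per speaker.
    return " ".join(
        f"{s['id']} is " + ", ".join(v for k, v in s.items() if k != "id") + "."
        for s in speakers
    )

def partner_prompt(speakers: List[Speaker]) -> str:
    # Partner form: shared attribute values first, then speaker-specific ones.
    keys = [k for k in speakers[0] if k != "id"]
    shared = [k for k in keys if len({s[k] for s in speakers}) == 1]
    ids = ", ".join(s["id"] for s in speakers)
    parts = ([f"{ids} are all " + ", ".join(speakers[0][k] for k in shared) + "."]
             if shared else [])
    for s in speakers:
        unique = [s[k] for k in keys if k not in shared]
        if unique:
            parts.append(f"{s['id']} is " + ", ".join(unique) + ".")
    return " ".join(parts)

def build_input(prompt: str, dialogue_context: List[str], eos: str = "[EOS]") -> str:
    # Prepend the speaker prompt P to the dialogue context D, separated by [EOS].
    return f" {eos} ".join([prompt] + dialogue_context)
```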
2
The POM of a verb is defined through the analysis of the contexts in which it occurs within a reference corpus (in our case the BNC) and its frequency within each context. We follow (Hanks, 2006) and conjecture that verbs that occur with similar relative frequency in many different contexts (e.g. 'take') have low POM, while verbs that have just one, or very few, relatively high frequent contexts and some very infrequent contexts (e.g. 'butcher') have high POM.The identification of different contexts is hence the key elements of our method. Along with Hanks (2006) we consider the context of a verb as formed by the subject and/or the object with which it occurs. For the following examples:(i) invest money (ii) invest cash (iii) invest time we consider (i) and (ii) as the same context of use of the verb 'invest', while (iii) as a different context. In order to automatically identify similar contexts of a given verb, we followed a two-steps methodology: firstly, a vector representation of each context in which a target verb occurs was created. In the second step, a clustering algorithm was employed in order to identify similar vector representations and, therefore, similar contexts. As for the realization of the first step, we initially extracted all the sentences in which a target verbs occurs in the British National Corpus. For each sentence we then selected the subject and object of the verb, and matched them with the corresponding vectorial representation, using the dependency based word embeddings (WE) introduced by (Levy and Goldberg, 2014) . WE are low dimensional, dense and real-valued vectors which preserve syntactic and semantic information of words, and that have been proved to be efficient in several NLP tasks, such as detection of relational similarity (Mikolov et al., 2013b) , word similarity tasks (Mikolov et al., 2013a) and contextual similiarity (Melamud et al., 2015) . When both subject and object were available in the same sentence, the context vector was defined by averaging them (Melamud et al., 2015) . Otherwise, if one of the two was not present, the context vector would be equivalent to the available one. In the second step, we identified groups of similar contexts of the verb by clustering the context vectors obtained in phase 1. We used the Birch algorithm for its reliable performances with large sets of data (Zhang et al., 1996) and because the final number of clusters does not have to be previously defined: this is in line with the fact that the number of contexts of a verb is unknown. We used the scikit-learn implementation of the Birch algorithm 4 , whereby it is possible to experiment with different values for each parameter. Silhouette score (Rousseeuw, 1987) , a widely employed metric for the interpretation and validation of clustering results, was employed as an external metric to evaluate the results obtained with the different settings and to select the best one. Thus, the output of phase 2 was, for every target verb, a set of clusters, where each cluster corresponded to a different context (e.g. vector representations of examples (i) and (ii) were clustered together, while the one in (iii) was assigned to a different cluster). Finally, the Standard Deviation (SD) of the relative frequency values of clusters in the set was computed in order to assess the distributional characteristics of the verb. We took the SD value obtained in this way as the POM of the verb. 
In line with this intuition, SD values were expected to be low for verbs that occur with high frequency in several contexts (e.g. 'take') and high for verbs that occur with high frequency in just one or a few contexts (e.g. 'butcher').
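The two-step procedure can be summarised in a short sketch: average the available subject/object embeddings into context vectors, cluster them with Birch, and take the standard deviation of the clusters' relative frequencies as the POM score. The Birch threshold is an illustrative default, and the silhouette-based parameter search described above is omitted.

```python
import numpy as np
from sklearn.cluster import Birch

def context_vector(subj_vec=None, obj_vec=None):
    """Average the available subject/object embeddings for one occurrence of the verb."""
    vecs = [v for v in (subj_vec, obj_vec) if v is not None]
    return np.mean(vecs, axis=0) if vecs else None

def pom_score(context_vectors, threshold=0.5):
    """POM of a verb = std. dev. of the relative frequencies of its context clusters."""
    X = np.vstack(context_vectors)
    # n_clusters=None lets Birch decide how many contexts there are,
    # since the number of contexts of a verb is unknown a priori.
    labels = Birch(threshold=threshold, n_clusters=None).fit_predict(X)
    _, counts = np.unique(labels, return_counts=True)
    relative_freqs = counts / counts.sum()
    return float(np.std(relative_freqs))
```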
2
The proposed UniTranSeR mainly comprises three parts: Unified-modal Transformer Semantic (UTS) encoder (Sec. 3.1), Feature Alignment and Intention Reasoning (FAIR) layer (Sec. 3.2), and Hierarchical Transformer Response (HTR) decoder (Sec. 3.3), as shown in Figure 2 . We define the multimodal dialog generation task as generating the most likely response sequence Y = {y 1 , y 2 , • • • , y n } and selecting top-k most matched images, giving multimodal context utterances U = {u 1 , u 2 , . . . , u |U | } and multimodal knowledge base B as inputs. The probability of a textual response can be formally defined as,P (Y |U, B) = n t=1 P (y t |y 1 , . . . , y t−1 , U, B) (1)where y t represents the current token decoded by the HTR decoder.The UTS encoder is used to project all the multimodal features into a unified vector space for inter-modal interactions, while the FAIR layer is designed to align cross-modal hidden features, with textual features and visual features from previous UTS encoder as inputs. Similar to MAGIC , our HTR decoder is designed to decode three types of responses: general responses that refer to the highly frequent responses (e.g., courtesy greetings) in the conversation, such as "How can I help you?"; intention-aware responses that refer to the task-oriented utterances, such as "Found some similar black leather-jackets for you"; and multimodal responses that refer to the intentionaware responses with image output. The response type is determined by a query vector Q from the FAIR layer, in which an intention classifier is trained to decide which kind of response should be given out.We first use a text embedder and an image embedder to extract textual features and visual features, respectively, and extract informative features from external knowledge by utilizing both text and image embedders. Afterwards, we feed these three kinds of features into a unified Transformer encoder for unified-modal semantic representation learning.Text Embedder. To learn textual intra-modal features, we use a BERT tokenizer to split the input sentence into words and exploit a single transformer layer to obtain these words' initial embeddings. Note the self-attention mechanism in Transformer is order-less. So, it is necessary to encode the words' position as additional inputs. The final representation for each word is derived via summing up its word embedding and position embedding, followed by a layer normalization (LN) layer.Image Embedder. To learn visual intra-modal features, we use a contour slicer to cut the input images into patches and exploit ResNet-50 (He et al., 2016) to extract these patches' visual features. We notice that people usually focus on four parts of a clothing image: head, upper body, lower body, and feet, so we intuitively use an equal-height mode to slice an image into four patches, which efficiently solves the problem of region feature extraction, without using complex target detection networks such as Faster R- CNN (Ren et al., 2015) . Then, we feed the patches into ResNet-50 to get the patches' initial embeddings. Similarly, we also encode the position features for each patch via a 4-dimensional vector [image_index, patch_index, width, height] . Both visual and position features are then fed through a fully-connected (FC) layer, to be projected into the same embedding space. The final visual embedding for each patch is obtained by first summing up the two FC outputs, and then passing them through an LN layer.Knowledge Embedder. 
To integrate informative features from external knowledge 1 into the task-oriented dialog, we equip the product knowledge base for each utterance through searching a fashion item table provided by MMD. We then treat these searched knowledge entries into the same triplet format, i.e., (product, match, product), (product, attribute, value), (product, celebrity, pas-sion_score). Next, for the text and image elements of these triples, we use the text and image embedders to obtain their respective representations.Unified Transformer Encoder. After obtaining the multimodal initial embeddings, denoted as h t , h v and h k respectively, we project them into a unified semantic space to obtain interactive representations by using a unified Transformer encoder. Specifically, in each utterance, the textual features, visual features and informative features correspond to l tokens with "[TXT]", 4 tokens 2 with "[IMG]" and 4 tokens 3 with "[KNG]". In order to integrate dialog history of previous rounds, we initialize the current [CLS] p by using the representation of the previous round [CLS] p−1 . The output hidden state representations can then be phrased as:H p = f [CLS] p−1 h p t [TXT]h p v [IMG]h p k [KNG](2) where f (•) denotes the Transformer encoder, H p 0 denotes the hidden state representation of the current round [CLS] p , which is regarded as the contextual semantic vector of the entire utterance in this round, H p 1:l denotes the representations for the text sequence, H p l+1:l+4 denotes the representations for the patch sequence, and H p l+5:l+8 denotes the representations for knowledge entries. Note the superscript p is omitted for simplicity if no confusion occurs in the following discussion.To obtain better representations, we introduce the Masked Language Modeling (MLM) loss and Masked Patch Modeling (MPM) loss to train them. We denote the input words as w = {w 1 , . . . , w l }, the image patches as v = {v 1 , . . . , v 4 }, the knowledge elements as k = {k 1 , . . . , k 4 }, and the mask indices as m ∈ N L , where N is the natural numbers and L is the length of masked tokens. In MLM, we randomly mask out the input words with a probability of 15%, and replace the masked ones w m with a special token "[MASK]", as illustrated in Figure 3 . The goal is to predict these masked words by attentively integrating the information of their surrounding words w \m , image patches v and knowledge elements k, by minimizing the following loss:L MLM (θ) = −E (w,v,k)∼U log P θ w m |w \m , v, k(3) Similar to MLM, in MPM, we also randomly mask out the image patches and use zeros tensor to replace them, as shown in Figure 3 . Unlike textual words that can be categorized as discrete labels, visual features are high-dimensional and continuous tensors, thus cannot be supervised via a negative log-likelihood loss. Following UNITER , we built the MPM loss as:L MPM (θ) = E (w,v,k)∼U g θ v m |v \m , w, k (4)where v m are masked image patches and v \m are remaining patches. Note here g θ is defined as an L2 regression function, whereg θ v m |v \m , w, k = L i=1 f θ v (i) m − h v (i) m 2 2 (5)To align the cross-modal features for accurate intention classification and knowledge query, we devise a feature alignment and intention reasoning (FAIR) layer. In feature alignment, we use Image-Text Matching (ITM) and Word-Patch Alignment 4 (WPA) to conduct a two-level alignment. That is, ITM is used to align text and image in sentencelevel, while WPA is used to align each split word and each sliced patch in token-level. 
In intention reasoning, we fuse f ([CLS]) and aligned entities' hidden state representations to obtain a query vector Q, which is then used for intention classification and knowledge query.Image-Text Matching (ITM). In ITM, we use the output f ([CLS]) of the unified Transformer encoder to compute the match probability of the sampled pair. Specifically, we feed f ([CLS]) into an FC layer and a sigmoid function to predict a probability score P θ (w, v), which is between 0 and 1. During training, we sample a positive or negative pair (w, v) from the dataset D at each step. The negative pair is created by randomly replacing the image or text in the same batch. We employ a binary cross-entropy loss for optimization:EQUATIONwhere y is a binary truth label. Note here we only use ITM to train image-text pairs but without considering the knowledge vector, because it has already matched the textual sequence when being searched out.Word-Patch Alignment (WPA). For more finegrained alignment between each word and image patch, we introduce a WPA technology, which is used to train the consistency and exclusiveness between these cross-modal features to prompt alignment. We use a WPA loss to supervise the process, which is defined as:L WPA (θ) = − l i=1 4 j=1 T ij •φ (w i , v j ) (7)where φ denotes the cos(•) similarity function, T ∈ R l×4 is a ground truth table and each T ij ∈ T is a binary label 0 or 1. During training, we sample positive or negative pairs (w i , v j ) from each multimodal utterance to construct a probability table, as shown in Figure 2 . The above loss function L WPA is then used to update the parameters θ. During inference, we continue to fuse aligned entities' hidden state representation and f ([CLS]) to obtain a unified query vector Q, which contains multimodal query information with entity enhancement, and will be used for subsequent intention reasoning.Intention Classify (IC). Given the query vector Q, this component aims to understand the users' intention and thereafter determine which type of response should be generated. To be clear, there are a total of 17 types labeled in the MMD dataset, and each user's utterance is labeled with a specific intention type. Following MAGIC, we customize the type of response specifically for each intention, as shown in Table 1 . Subsequently, we leverage an MLP layer to predict Q's probability distribution and select the highest probability to generate a response. Besides, a cross-entropy loss is applied to optimizing the intention classifier:L IC (θ) = |U | i=1 17 j=1 I * ij log P θ (I ij | Q) (8)where P θ (I ij | Q) denotes the probability of being predicted as intention I ij , and I * ij is a ground truth label. The intention classifier is trained by the loss function L IC (θ) to update parameter θ, and finally outputs a reliable intention prediction result I in the inference phase.Knowledge Query (KQ). Given the predicted intention result I, this component first determines whether knowledge query is required based on Table 1. If required, we adopt a key-value memory mechanism to query all embedded knowledge triples 5 . Specifically, these embedded knowledge triples are divided into key parts and value parts, which are respectively denoted as vector K and vector V. 
Note here K is obtained through a linear fusion of the embedded head-entities and relations.The knowledge query process is as follows:α i = Softmax Q T • K i (9) V T = |M | i=1 α i V i (10)where α i denotes the attentive probability score for K i , |M | is the number of knowledge triples, and V T is a weighted sum of V i , which will be used for textual decoding in an intention-aware response.Multi-hop Recommend (MR). Given the predicted intention result I and one-hop query result V T , this component first needs to determine whether an image recommendation is required based on Table 1 . If required, we continue to use V T as a query vector to perform another hop query over the entire knowledge base, which implies that the product images will be recommended, if the key parts of their corresponding triples have high similarity to V T . Specifically,EQUATIONAfter deriving β i , we use V I = {q i }, an image pointer vector, to select images with top β i for recommendation, whereEQUATIONand 1 1×512 is a column vector with each element equal to 1, which denotes for the special token [URL] of the image's link. Note here 512 is the embedding size in our unified Transformer encoder. It is not difficult to see that UniTranSeR can extend the above one-hop knowledge query to multi-hop by iteratively performing attention-based key-value reasoning and ultimately achieve multi-hop image recommendation.As mentioned earlier, we used a hierarchy mechanism to decode different types of response sequences, including general responses, intentionaware responses and multimodal responses. They share the same uni-directional Transformer layer, but the semantic representations fed to this decoder are different. Specifically, for general responses, we just take the sentence-level representations f ([CLS]) as input. For intention-aware responses, we take the concatenation of f ([CLS]) and attentive vector V T followed by an FC layer as input. For multimodal responses, we take the input for the intention-aware responses, as well as V I , the image pointer vector, as input.4 Experimental Setup
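Before moving on, the key-value knowledge query (Eq. 9-10) and the second-hop recommendation (Eq. 11-12) can be sketched in a few lines of PyTorch. Tensor shapes and the top-k selection helper are illustrative; the sketch only mirrors the attention computations defined above.

```python
import torch
import torch.nn.functional as F

def knowledge_query(Q, K, V):
    """One-hop key-value query (Eq. 9-10): attend over triple keys with Q and
    return the attention-weighted sum of the value vectors."""
    # Q: (d,), K: (M, d), V: (M, d)
    alpha = F.softmax(K @ Q, dim=0)   # alpha_i = softmax(Q^T . K_i)
    return alpha @ V                  # V_T = sum_i alpha_i V_i

def multi_hop_recommend(Q, K, V, top_k=5):
    """Second hop: use the first-hop result V_T as a new query over the keys and
    return the indices of the triples (and hence images) with the highest beta."""
    V_T = knowledge_query(Q, K, V)
    beta = F.softmax(K @ V_T, dim=0)
    top = torch.topk(beta, k=min(top_k, beta.numel())).indices
    return V_T, top  # V_T feeds the textual decoder; top selects images to recommend
```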
2
Our OpenRE framework mainly consists of two modules, the relation similarity calculation module and the relation clustering module. For relation similarity calculation, we propose Relational Siamese Networks (RSNs), which learn to predict whether two sentences mention the same relation. To utilize large-scale unsupervised data and distantly-supervised data, we further propose Semi-supervised RSN and Distantly-supervised RSN. Finally, in the relation clustering module, with the learned relation metric, we utilize hierarchical agglomerative clustering (HAC) and Louvain clustering algorithms to cluster target relation instances of new relation types.The architecture of our Relational Siamese Networks is shown in Figure 2 . CNN modules encode a pair of relational instances into vectors, and several shared layers compute their similarity.Sentence Encoder. We use a CNN module as the sentence encoder. The CNN module includes an embedding layer, a convolutional layer, a max-pooling layer, and a fully-connected (FC) layer. The embedding layer transforms the words in a sentence x and the positions of entities e head and e tail into pre-trained word embeddings and random-initialized position embeddings. Following (Zeng et al., 2014) , we concatenate these embeddings to form a vector sequence. Next, a one-dimensional convolutional layer and a maxpooling layer transform the vector sequence into features. Finally, an FC layer with sigmoid activation maps features into a relational vector v. To summarize, we obtain a vector representation v for a relational sentence with our CNN module:EQUATIONin which we denote the joint information of a sentence x and two entities in it e head and e tail as a data sample s. And with paired input relational instances, we have:EQUATIONin which two CNN modules are identical and share all the parameters. Similarity Computation. Next, to measure the similarity of two relational vectors, we calculate their absolute distance and transform it into a realnumber similarity p ∈ [0, 1]. First, a distance layer computes the element-wise absolute distance of two vectors:EQUATIONThen, a classifier layer calculates a metric p for relation similarity. The layer is a one-dimensionaloutput FC layer with sigmoid activation:EQUATIONin which σ denotes the sigmoid function, k and b denote the weights and bias. To summarize, we obtain a good similarity metric p of relational instances.Cross Entropy Loss. The output of RSN p can also be explained as the probability of two sentences mentioning two different relations. Thus, we can use binary labels q and binary cross entropy loss to train our RSN:L l = E d l ∼D l [q ln(p θ (d l )) + (1 − q) ln(1 − p θ (d l ))], (5)in which θ indicates all the parameters in the RSN. labeled data d l p = 0.7 q = 0 Cross Entropy Relational Siamese Network … (a) Supervised RSN (auto)-labeled data d l q = 0 Cross Entropy (+VAT) Relational Siamese Network … unlabeled data d u … p = 0.7 p = 0.6 Conditional Entropy +VAT (b) Weakly-supervised RSNsTo discover relation clusters in the open-domain corpus, it is beneficial to not only learn from labeled data, but also capture the manifold of unlabeled data in the semantic space. To this end, we need to push the decision boundaries away from high-density areas, which is known as the cluster assumption (Chapelle and Zien, 2005) . We try to achieve this goal with several additional loss functions. In the following paragraphs, we denote the labeled training dataset as D l and a couple of labeled relational instances as d l . 
Similarly, we denote the unlabeled training dataset as D u and a couple of unlabeled instances as d u .Conditional Entropy Loss. In classification problems, a well-classified embedding space usually reserves large margins between different classified clusters, and optimizing margin can be a promising way to facilitate training. However, in clustering problems, type labels are not available during training. To optimize margin without explicit supervision, we can push the data points away from the decision boundaries. Intuitively, when the distance similarity p between two relational instances equals 0.5, there is a high prob-ability that at least one of two instances is near the decision boundary between relation clusters. Thus, we use the conditional entropy loss (Grandvalet and Bengio, 2005) , which reaches the maximum when p = 0.5, to penalize close-boundary distribution of data points:EQUATIONVirtual Adversarial Loss. Despite its theoretical promise, conditional entropy minimization suffers from shortcomings in practice. Due to neural networks' strong fitting ability, a very complex decision hyperplane might be learned so as to keep away from all the training samples, which lacks generalizability. As a solution, we can smooth the relational representation space with locally-Lipschitz constraint.To satisfy this constraint, we introduce virtual adversarial training (Miyato et al., 2016) on both branches of RSN. Virtual adversarial training can search through data point neighborhoods, and penalize most sharp changes in distance prediction. For labeled data, we haveEQUATIONin which D KL indicates the Kullback-Leibler di- vergence, p θ (d l , t 1 , t 2 )indicates a new distance estimation with perturbations t 1 and t 2 on both input instances respectively. Specifically, t 1 and t 2 are worst-case perturbations that maximize the KL divergence betweenp θ (d l ) and p θ (d l , t 1 , t 2 )with a limited length. Empirically, we approximate the perturbations the same as the original paper (Miyato et al., 2016) . Specifically, we first add a random noise to the input, and calculate the gradient of the KL-divergence between the outputs of the original input and the noisy input. We then add the normalized gradient to the original input and get the perturbed input. And for unlabeled data, we haveEQUATIONin which the perturbations t 1 and t 2 are added to word embeddings rather than the words themselves.To summarize, we use the following loss function to train Semi-supervised RSN, which learns from both labeled and unlabeled data:EQUATIONin which λ v and λ u are two hyperparameters.To alleviate the intensive human labor for annotation, the topic of distantly-supervised learning has attracted much attention in RE. Here, we propose Distantly-supervised RSN, which can learn from both distantly-supervised data and unsupervised data for relational knowledge transfer. Specifically, we use the following loss function:EQUATIONwhich treats auto-labeled data as labeled data but removes the virtual adversarial loss on the autolabeled data. The reason to remove the loss is simple: virtual adversarial training on auto-labeled data can amplify the noise from false labels. Indeed, we do find that the virtual adversarial loss on autolabeled data can harm our model's performance in experiments.We do not use more denoising methods, since we think RSN has some inherent advantages of tolerating such noise. Firstly, the noise will be overwhelmed by the large proportion of negative sampling during training. 
Secondly, during clustering, the prediction of a new relation cluster is based on areas where the density of relational instances is high. Outliers from noise, as a result, will not influence the prediction process so much.After RSN is learned, we can use RSN to calculate the similarity matrix of testing instances. With this matrix, several clustering methods can be applied to extract new relation clusters.Hierarchical Agglomerative Clustering. The first clustering method we adopt is hierarchical agglomerative clustering (HAC). HAC is a bottomup clustering algorithm. At the start, every testing instance is regarded as a cluster. For every step, it agglomerates two closest instances. There are several criteria to evaluate the distance between two clusters. Here, we adopt the complete-linkage criterion, which is more robust to extreme instances.However, there is a significant shortcoming of HAC: it needs the exact number of clusters in advance. A potential solution is to stop agglomerating according to an empirical distance threshold, but it is hard to determine such a threshold. This problem leads us to consider another clustering algorithm Louvain (Blondel et al., 2008) .Louvain. Louvain is a graph-based clustering algorithm traditionally used for detecting communities. To construct the graph, we use the binary approximation of RSN's output, with 0 indicating an edge between two nodes. The advantage of Louvain is that it does not need the number of potential clusters beforehand. It will automatically find proper sizes of clusters by optimizing community modularity. According to the experiments we conduct, Louvain performs better than HAC.After running, Louvain might produce a number of singleton clusters with few instances. It is not proper to call these clusters new relation types, so we label these instances the same as their closest labeled neighbors.Finally, we want to explain the reason why we do not use some other common clustering methods like K-Means, Mean-Shift and Ward's (Ward Jr, 1963) method of HAC: these methods calculate the centroid of several points during clustering by merely averaging them. However, the relation vectors in our model are high-dimensional, and the distance metric described by RSN is non-linear. Consequently, it is not proper to calculate the centroid by simply averaging the vectors.
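As a concrete illustration of the pair-similarity computation described above, the following is a minimal PyTorch sketch of an RSN forward pass, assuming a simplified CNN encoder; the embedding sizes, kernel width, hidden dimension, and position-index convention are illustrative assumptions rather than the reported configuration. The learned similarity p is what the HAC or Louvain clustering step later consumes.

import torch
import torch.nn as nn

class RelationalSiameseNet(nn.Module):
    """Minimal sketch of an RSN: a shared CNN encoder, an element-wise
    absolute-distance layer, and a sigmoid classifier layer."""
    def __init__(self, vocab_size, emb_dim=50, pos_dim=5, hidden=230, max_len=120):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        # relative positions to head/tail entities, shifted to be non-negative
        self.pos_emb = nn.Embedding(2 * max_len, pos_dim)
        in_dim = emb_dim + 2 * pos_dim
        self.conv = nn.Conv1d(in_dim, hidden, kernel_size=3, padding=1)
        self.fc = nn.Linear(hidden, hidden)
        self.classifier = nn.Linear(hidden, 1)   # weights k and bias b

    def encode(self, tokens, pos_head, pos_tail):
        # tokens, pos_head, pos_tail: (batch, seq_len) index tensors
        x = torch.cat([self.word_emb(tokens),
                       self.pos_emb(pos_head),
                       self.pos_emb(pos_tail)], dim=-1)       # (B, L, in_dim)
        h = self.conv(x.transpose(1, 2))                      # (B, hidden, L)
        h = torch.max(h, dim=-1).values                       # max-pooling over time
        return torch.sigmoid(self.fc(h))                      # relational vector v

    def forward(self, sample1, sample2):
        v1 = self.encode(*sample1)
        v2 = self.encode(*sample2)
        d = torch.abs(v1 - v2)                                # element-wise absolute distance
        return torch.sigmoid(self.classifier(d)).squeeze(-1)  # similarity p in [0, 1]

# Supervised training sketch: binary cross entropy against pair labels q.
# loss = nn.functional.binary_cross_entropy(p, q.float())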
2
In this section, we introduce our experimental settings and setup. Our experiments employ Grover (Zellers et al., 2019) as the text generator. We consider three generation configurations in our experiments, described as follows:

• Model Sizes - Generative models often come with pre-defined sizes that refer to the layer widths and parameterization. For Grover, the model size options include Base, Large, and Mega.

• Sampling Method - The sampling function controls the decoding process used to generate text. We explore variants of top-k (Fan et al., 2018), top-p nucleus sampling, and the associated p/k values.

• Conditioning - The length of the initial article text given to the model as conditioning. The initial tokens are concatenated to the end of the title sequence for the model to start generating from.

In the design of our experiments, while there are countless possibilities to search over, we deliberately sought out settings that are most general and/or constitute fine-grained, subtle changes. Such subtle changes are likely to be more challenging to detect than larger changes. For example, predicting the Grover parameterization subsumes the task of distinguishing Grover from GPT-2; we assume that if a model is able to solve the former, the latter becomes relatively trivial.

We train a classifier model to discriminate between different model configurations. Generally, the task is framed as a multi-class classification problem where each model configuration is a class to be predicted. Models accept a sequence of tokens as input. Sequences pass through a parameterized or non-parameterized encoder, whose output is finally passed to a softmax classification layer.

In this work, we explore and benchmark the effectiveness of various encoding inductive biases such as recurrent, convolutional, and self-attention based models. This is primarily motivated as a probe into the problem domain, i.e., by observing the behaviour of different encoder architectures, we may learn more about the nature of these tasks/datasets. We consider the following encoding architectures: (1) BoW (Linear) - a simple bag-of-words (BoW) baseline that averages the word embeddings and passes the average representation into a single linear classifier, Y = Softmax(W(X)). (2) BoW (MLP) - another simple baseline that builds on the Linear baseline by adding a single nonlinear layer with a ReLU activation function, i.e., Y = Softmax(W_2 σ_r(W_1(X))). (3) ConvNet - a 1D convolution layer of filter width 3; we convolve over the input embeddings and pass the averaged representation into a linear softmax classification layer. (4) LSTM - similar to the ConvNet model, we encode the input sequence with an LSTM layer and pass the mean-pooled representation into a softmax layer. (5) Transformer Encoders - we use a 4-layer multi-headed Transformer (Vaswani et al., 2017) encoder.

This section outlines our experimental setup.

News Corpora As a seed corpus, we use the CNN/Dailymail news corpus. This corpus is widely used in other NLP tasks (Hermann et al., 2015) such as question answering and summarization. The CNN/Dailymail corpus comprises approximately 90K news articles. Given an initial seed corpus of N news articles, we generate an additional collection of N machine-generated articles for each configuration.

Tasks We define ten tasks as described in Table 1. These tasks aim at predicting the correct model configuration given the generated text.
For all tasks, we use a maximum sequence length of 500 and split the dataset into 80%/10%/10% train, development, and test splits. We include an additional variant, +h, which denotes that we add the human-written article as an additional class to the mix.

Model Training For all models, we fix the word embedding dimension to d = 64. Embeddings are trained from scratch. The hidden unit size of all encoders is also set to 64. We tuned the model dimensions over the range d ∈ {16, 32, 64, 128, 256} and found no noticeable improvement beyond d = 64. We train all models for 50 epochs with a batch size of 64. We employ early stopping with a patience of 3 epochs if validation accuracy does not improve. Final test accuracy is reported for the model with the best results on the validation set.
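As one concrete instance of the encoders compared above, here is a minimal PyTorch sketch of the BoW (MLP) baseline under the stated training settings (d = 64, embeddings trained from scratch); the padding convention and masking are our own assumptions.

import torch
import torch.nn as nn

class BowMLPClassifier(nn.Module):
    """Bag-of-words baseline: average word embeddings, one ReLU layer,
    then a softmax classification layer (returned here as logits)."""
    def __init__(self, vocab_size, num_classes, d=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d, padding_idx=0)  # trained from scratch
        self.hidden = nn.Linear(d, d)
        self.out = nn.Linear(d, num_classes)

    def forward(self, tokens):
        # tokens: (batch, seq_len) with index 0 used for padding
        mask = (tokens != 0).float().unsqueeze(-1)
        x = self.emb(tokens) * mask
        x = x.sum(dim=1) / mask.sum(dim=1).clamp(min=1.0)      # mean over non-pad tokens
        return self.out(torch.relu(self.hidden(x)))            # Y = Softmax(W2 sigma_r(W1 X))

# Training sketch: cross-entropy over configuration classes, batch size 64,
# up to 50 epochs with early stopping (patience 3) on validation accuracy.
# loss = nn.CrossEntropyLoss()(model(batch_tokens), batch_labels)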
2
We present the datasets explored for binary classification and correlation analyses. We also describe settings for reporting ablation and final results.The SummaC benchmark (Laban et al., 2021) introduces a collection of datasets for binary factual consistency evaluation. A data point is labeled as positive if it contains no factual inconsistencies or is rated the highest possible score in the case of Likert scaling, and as negative otherwise. We now briefly describe the datasets in the benchmark and any departures from the original benchmark, and additional datasets we use for correlation analysis. We refer the reader to Laban et al. (2021) for further details regarding the benchmark creation. XSF Maynez et al. (2020) consists of summaries from the XSum dataset (Narayan et al., 2018) annotated for word-level factual consistency errors.Polytope Huang et al. (2020) propose a typology of eight summarization errors consisting of both content and stylistic errors and annotate model outputs from 10 systems on CNN/DailyMail data. The original SummaC benchmark included the Omission and Addition errors of this proposed typology as factual inconsistencies, but these are largely extractive, factually consistent summaries. We thus label these examples as factually consistent and report results on this modified dataset. QAGs Wang et al. (2020b) crowdsource sentence-level summary annotations for factual consistency across CNN/Daily Mail and XSum data. We only report correlation analysis for this dataset as it was not a part of SummaC.Metric Implementation Metrics were applied directly from the original GitHub repository or by using the SacreRouge Library , which was also used in correlation analysis. The learned metrics make use of code released from Laban et al. (2021) for training, and all models are implemented in PyTorch (Li et al., 2020) and in the Transformers library (Wolf et al., 2019) . The BART-large (QA2D) QG and Electra-large QA models are applied from the QAEval relevance modeling metric .Ablation Settings Following Laban et al. (2021), a metric threshold score for binary classification is determined from the validation set of SummaC and applied to the test set. This threshold score is determined for every metric studied. Furthermore, we note that hyperparameter choices for several of the strong entailment baselines, namely SCConv, SCZeroShot, and MNLI are derived from Laban et al. (2021) , thus providing a reasonable comparison to QAFactEval, whose hyperparameters we tune on the SummaC validation set. For ablation studies, we both perform thresholding and evaluation on the validation set to preserve the integrity of the test set. For each benchmark dataset, we sample a random subset of 80% of the validation set to determine the threshold and evaluate on the remaining 20% of the validation set. The best performing combination of QA metric components constitutes our QAFACTEVAL metric. We take the best performing combination of QA metric components and vary a given component, such as answer selection, while holding all other components constant and consistent with the best component combination.Training Settings To tune the parameters of the learned metrics, we train on a subset of 50k synthetic data points from FactCC, following Laban et al. (2021) . We name these runs synthetic setting due to the lack of human-labeled data. 
We also experiment with a supervised setting by fine-tuning the parameters on the SummaC validation set for each individual dataset, choosing the threshold on this validation data, and applying the model to the test set. Training on such a small amount of data is feasible due to the small number of parameters of the learned metrics. Cross entropy loss with Adam (Kingma and Ba, 2015) optimizer is used, with a batch size of 32 and a learning rate of 1e-2.
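A minimal sketch of the thresholding procedure described above: sweep candidate thresholds over a metric's validation scores, keep the one that maximizes the validation classification metric, and freeze it for the test set. The use of balanced accuracy and a linear sweep of 200 candidate thresholds are assumptions for illustration.

import numpy as np
from sklearn.metrics import balanced_accuracy_score

def pick_threshold(val_scores, val_labels, num_candidates=200):
    """Choose the score threshold that maximizes balanced accuracy on
    validation data; higher scores are treated as 'factually consistent'."""
    candidates = np.linspace(val_scores.min(), val_scores.max(), num_candidates)
    best_t, best_acc = None, -1.0
    for t in candidates:
        preds = (val_scores >= t).astype(int)
        acc = balanced_accuracy_score(val_labels, preds)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Usage sketch: a threshold is chosen per metric and per benchmark dataset,
# then frozen and applied to the held-out test scores.
# t = pick_threshold(val_scores, val_labels)
# test_preds = (test_scores >= t).astype(int)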
2
The CReST corpus (Eberhard et al., 2010) was used to evaluate speech patterns in 10 teams (20 individuals) of people performing a collaborative, remote, search task. Approximately 8 minutes of language data were extracted for each team. Contained in the corpus is dialogue that occurred before and after a time-limit warning which allows us to examine the effects of time pressure. The corpus also provides an objective measure of the pairs' task performance, which we used to operationalize "effectiveness of communication". Additionally, the members of each team had asymmetrical roles, which we included as an additional factor. Lastly, the corpus was annotated for various linguistic and dialogue events in the speech, including conversational moves and disfluencies (see Eberhard et al. (2010) for additional details about the corpus).The members of each pair were randomly assigned to the Director role and Searcher role. The director was seated in front of a computer that displayed a floor plan map of the search environment and wore a headset for remotely communicating with the searcher. The searcher also wore a headset and was situated in the search environment which consisted of a hallway and 6 connected office rooms. Neither was familiar with the environment. Distributed throughout the environment were 8 blue boxes, each with three colored blocks, 8 empty green boxes (numbered 1-8), 8 empty pink boxes, and a cardboard box that was at the furthest point from entrance at the end of a hallway. Some of the colored boxes were partially hidden behind a door, on a chair, under a table, etc.The pairs were informed that the director's map showed the locations of all the boxes except the green ones and that the locations of some of the blue boxes were inaccurate. They were told that the searcher was to retrieve the cardboard box, put the blue blocks from the blue boxes into it, and report the locations of the green boxes to the director, who was to mark them on the map by dragging green icons numbered 1-8. They were told that instructions for the pink boxes would be given to them later. Five minutes into the task, the director's communication with the searcher was put on hold and the director was told that each blue box contained a yellow block which was to be put into each of the pink boxes. To examine effects of time pressure, the director also was told that they had 3 minutes to complete all of the objectives, and a timer that counted down the 3 minutes was displayed next to the map.Disfluencies were coded according to the HCRC Disfluency Coding Manual (Lickley, 1998) , which includes categories for prolongations, pauses (filled and silent), and self-repairs: repetitions (e.g., "Llook in the box"), substitutions (e.g., "the pink-uh, blue box"), insertions (e.g., "go into the room-the nearby room") and deletions (e.g., "we don't have-let's hurry up"). Disfluency rates were calculated for each participant as a proportion per every 100 words. Speech rate (words per minute, or w.p.m.) and mean length of utterance (average number of words per turn at talk, or MLU) also were calculated.All annotations were carried out using the open-source EXMARaLDA Partitur-Editor (Schmidt and Wörner, 2009) . For extracting disfluency data from the annotated files, we used a custom-built search tool called DeepSearch9 1 .The transcribed utterances were hand-annotated for type of conversational move using Carletta et al. (1997) 's scheme. Initiation moves include Instruct, Explain, Wh-and Yes/No questions. 
Two other Initiation moves are subcategories of Yes/No questions, namely, Check and Align. Checks seek confirmation that one has correctly understood what the partner recently said, often by repeating or paraphrasing the partner's utterance. Aligns explicitly request confirmation that a partner has understood what was just said and is ready to move on. They typically are in the form of an "okay?" or "right?" appended to the end of an Instruct or Explain move. Response moves include Acknowledge, Wh-, Yes-and No-Replies. Utterance-initial "okays" and "alrights" were coded as Ready moves; they serve as a preparation for the following initiation move (e.g., "Okay, now go into the next room"). The rates of producing each type of move were calculated by dividing them by the total number of utterances.Performance was scored with respect to the number of colored boxes whose task was completed, with a maximum score of 24. The average score was 9.9 (range 1 -19) and the median was 8. The median score was used to divide the 10 teams into an effective and ineffective group with average scores of 14.8 (S.D. = 4.0) and 5.0 (S.D. = 2.5), respectively.
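The per-participant measures and the median split described above can be computed with straightforward bookkeeping; the sketch below assumes hypothetical inputs (per-turn token counts, total disfluency counts, speaking time in minutes, and a dict of team scores) and a particular tie-breaking rule for the split, neither of which is specified by the corpus itself.

from statistics import median

def speaker_measures(turn_lengths, disfluency_count, minutes_speaking):
    """Compute per-participant measures.

    turn_lengths: list of token counts, one per turn at talk.
    disfluency_count: total number of coded disfluencies.
    minutes_speaking: speaking time in minutes.
    """
    total_words = sum(turn_lengths)
    disfluency_rate = 100.0 * disfluency_count / total_words   # per 100 words
    speech_rate = total_words / minutes_speaking               # words per minute
    mlu = total_words / len(turn_lengths)                      # mean length of utterance
    return disfluency_rate, speech_rate, mlu

def split_by_performance(team_scores):
    """Median split of team scores into effective / ineffective groups."""
    m = median(team_scores.values())
    effective = [t for t, s in team_scores.items() if s > m]
    ineffective = [t for t, s in team_scores.items() if s <= m]
    return effective, ineffective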
2
We would like to compute the effect of forcing an event of a certain type to occur in the text. The event types that get the largest increase in probability due to this are held to be 'script' events. Computing these quantities falls within the domain of causal inference, and hence will require its tools be used. There are three fundamental steps in causal inference we will need to work through to accomplish this: (1) Define a Causal Model: Identify the variables of interest in the problem, and define causal assumptions regarding these variables, (2) Establish Identifiability: With the given model, determine whether the causal quantity can be computed as a function of observed data. If it can, derive this function and move to (3) Estimation: Estimate this function using observed data. We go through each step in the next three subsections.To best contrast with prior work, we use the event representation of Chambers and Jurafsky (2008) and others (Jans et al., 2012; Rudinger et al., 2015) . A description of this representation is provided in the Supplemental.A causal model defines a set of causal assumptions on the variables of interest in a problem. While there exists several formalisms that accomplish this, in this paper we make use of causal Bayesian networks (CBN) (Spirtes et al., 2000; Pearl, 2000) . CBNs model dependencies be-tween variables graphically in a manner similar to Bayesian networks; the key distinction being that the edges in a CBN posits a direction of causal influence between the variables 3 .We will define our causal model from a top down, data generating perspective in a way that aligns with our conceptual story from the previous section. Below we describe the four types of variables in our model, as well as their causal dependencies.The World, U: The starting point for the generation of our data is the real world. This context is explicitly represented by the unmeasured variable U . This variable is unknowable and in general unmeasurable: we don't know how it is distributed, nor even what 'type' of variable it is. This variable is represented by the hexagonal node in Figure 2 .The Text, T: The next type of variable represents the text of the document. For indexing purposes, we segment the text into chunks T 1 ,...,T N , where N is the number of realis events explicitly mentioned in the text. The variable T i is thus the text chunk corresponding to the i th event mentioned in text. These chunks may be overlapping, and may skip over certain parts of the original text. 4 The causal relationship between various text chunks is thus ambiguous. We denote this by placing bidirectional arrows between the square text nodes in Figure 2 . The context of the world also causally influences the content of the text, hence we include an arrow from U to all text variables, T i .Event Inferences, e: In our story in Section 2, an agent reads a chunk of text and infers the type of event that was mentioned in the piece of text. This inference is represented (for the i th event in text) in our model by the variable e i ∈ E where E is the set of possible atomic event types (described at the end of this section). 53 See Pearl 2000; Bareinboim et al. (2012) for a comprehensive definition of CBNs and their properties.4 Keeping with prior work, we use the textual span of the event predicate syntactic dependents as the textual content of an event. 
The ordering of variables Ti corresponds to the positions of the event predicates in the text.5 For this study we use the output of information extraction tools as a proxy for the variable ei (see supplemental). As such, it is important to note that there will be bias in computations due to measurement error. Fortunately, there do exists methods in the causal inference literature that can adjust for this bias (Kuroki and Pearl, 2014; Miao et al., 2018) . Wood-Doughty et al. (2018) derive equations in a case setting related to ours (i.e. with measurement bias on the variable being intervened on). Dealing with this issue will be an important next step for future work. The textual content of T i causally influences the inferred type e i , hence directional connecting arrows in Figure 2 .Discourse Representation, D The variable e i represent a high level abstraction of part of the semantic content found in T i . Is this information about events used for later event inferences by an agent reading the text? Prior results in causal network/chain theories of discourse processing (Black and Bower, 1980; Trabasso and Sperry, 1985; Van den Broek, 1990 ) seem to strongly point to the affirmative. In brief, these theories hold that the identities of the events occurring in the textand the causal relations among them -are a core part of how a discourse is represented in human memory while reading, and more-so, that this information significantly affects a reader's event based inferences (Trabasso and Van Den Broek, 1985; Van den Broek and Lorch Jr, 1993 i ∈ E * is a sequence 6 of events that were explicitly stated in the text, up to step i. After each step, the in-text event inferred at i (the variable e i ) is appended to D I i+1 . The causal parents of D I i+1 are thus e i and D I i (which is simply copied over). We posit that the information in D I i provides information in the inference of e i , and thus draw an arrow from D I i to e i . Unstated events not found in the text but inferred by the reader also have an effect on event inferences Ratcliff, 1986, 1992; Graesser et al., 1994) . We thus additionally take this into consideration in our causal model by including an out of text discourse representation variable, D O i ∈ 2 |E| . This variable is a bag of events that a reader may infer implicitly from the text chunk T i using common sense. Its causal parents are thus both the text chunk T i , as well as the world context U ; its causal children are e i . Obtaining this information is done via human annotation and discussed later.D i is thus equal to (D I i , D O i ), and inherits the incoming and outgoing arrows of both in Figure 2 .Our goal is to compute the effect that intervening and setting the preceding event e i−1 to k ∈ E has on the distribution over the subsequent event e i . Now that we have a causal model in the form of Fig. 2 , we can now define this effect. Using the notation of Pearl (2000) , we write this as:p(e i |do(e i−1 = k)) (1)The semantics of do(e i−1 = k) are defined as an 'arrow breaking' operation on Figure 2 which deletes the incoming arrows to e i−1 (the dotted arrows in Figure 2 ) and sets the variable to k. Before a causal query such as Eq. 1 can be estimated we must first establish identifiability (Shpitser and Pearl, 2008) : can the causal query be written as a function of (only) the observed data? Eq. 1 is identified by noting that variables T i−1 and D i−1 meet the 'back-door criterion' of Pearl (1995) , allowing us to write Eq. 
1 as the following:

EQUATION

Our next step is to estimate the above equation. If one has an estimate for the conditional p(e_i | e_{i−1}, D_{i−1}, T_{i−1}), then one may "plug it into" Eq. 2 and use a Monte Carlo approximation of the expectation (using samples of (T, D)). This simple plug-in estimator is what we use here. It is important to be aware that this estimator, specifically when plugging in machine learning methods, is quite naive (e.g. Chernozhukov et al. (2018)), and will suffer from an asymptotic (first-order) bias which prevents one from constructing meaningful confidence intervals or performing certain hypothesis tests. That said, in practice these machine-learning-based plug-in estimators can achieve quite reasonable performance (see, for example, the results in Shalit et al. 2017), and since our current use case can be validated empirically, we save the use of more sophisticated estimators (and proper statistical inference) for future work.

Eq. 2 depends on the conditional p_{e_i} = p(e_i | e_{i−1}, D_{i−1}, T_{i−1}), which we estimate via standard ML techniques with a dataset of samples drawn from p(e_i, e_{i−1}, D_{i−1}, T_{i−1}). There are two issues: (1) How do we deal with out-of-text events in D_{i−1}? and (2) What form will p_{e_i} take?

Dealing with Out-of-Text Events Recall that D_i is the combination of the variables D^I_i and D^O_i. To learn a model for p_{e_i} we require samples from the full joint. Out of the box, however, we only have access to p(e_i, e_{i−1}, D^I_{i−1}, T_{i−1}). If, for the samples in our current dataset, we could draw samples from p_D = p(D^O_{i−1} | e_i, e_{i−1}, D^I_{i−1}, T_{i−1}), then we would have access to a dataset with samples drawn from the full joint.

In order to 'draw' samples from p_D we employ human annotation. Annotators are presented with a human-readable form of (e_i, e_{i−1}, D^I_{i−1}, T_{i−1}) and are asked to annotate possible events belonging in D^O_{i−1}. (In the final annotation experiment, we found it easier to provide annotators with only the text T_{i−1}, given that many events in D^I_{i−1} are irrelevant.) Rather than opt for noisy annotations obtained via freeform elicitation, we instead provide users with a set of 6 candidate choices for members of D^O_{i−1}. The candidates are obtained from various knowledge sources: ConceptNet (Speer and Havasi, 2012), VerbOcean (Chklovski and Pantel, 2004), and high-PMI events from the NYT Gigaword corpus (Graff et al., 2003). The top two candidates are selected from each source. In a scheme similar to Zhang et al. (2017), we ask users to rate candidates on an ordinal scale and consider candidates rated at or above a 3 (out of 4) to be within D^O_{i−1}. We found annotator agreement to be quite high, with a Krippendorff's α of 0.79. Under this scheme, we crowdsourced a dataset of 2000 fully annotated examples on the Mechanical Turk platform. An image of our annotation interface is provided in the Appendix.

The Conditional Model We use neural networks to model p_{e_i}. In order to deal with the small amount of fully annotated data available, we employ a finetuning paradigm. We first train a model on a large dataset that does not include annotations for D^O_{i−1}. This model consists of a single-layer, 300-dimensional GRU encoder which encodes [D^I_{i−1}, e_{i−1}] into a vector v_e ∈ R^d and a CNN-based encoder which encodes T_{i−1} into a vector v_t ∈ R^d. The term p_{e_i} is modeled as p_{e_i} ∝ A v_e + B v_t for matrices A and B of dimension |E| × d.
We then finetune this model on the 2000 annotated examples including D O i−1 . We add a new parameter matrix, C, to the previously trained model (allowing it to take D O i−1 as input) and model p e i as:p e i ∝ Av e + Bv t + Cv oThe input v o is the average of the embeddings for the events found in D O i−1 . The parameter matrix C is thus the only set of parameters trained 'from scratch,' on the 2000 annotated examples. The rest of the parameters are initialized and finetuned from the previously trained model. See Appendix for further training details.Provided a model of the conditional p e i we can approximate Eq. 2 via Monte Carlo by taking our annotated dataset of N = 2000 examples and computing the following average:EQUATIONThis gives us a length |E| vectorP k whose l th component,P kl gives p(e i = l|do(e i−1 = k)). We compute this vector for all values of k. Note that this computation only needs to be done once.There are several ways one could extract scriptlike knowledge using this information. In this paper, we define a normalized score over intervenedon events such that the script compatibility score between two concurrent events is defined as:S(e i−1 = k, e i = l) =P kl E j=1P jl (4)We term this as the 'Causal' score in the eval below.
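The plug-in estimate of Eq. 3 and the compatibility score of Eq. 4 reduce to a Monte Carlo average followed by a column normalization. The numpy sketch below assumes a hypothetical model.conditional(...) stand-in for the fine-tuned conditional p(e_i | e_{i-1}, D_{i-1}, T_{i-1}).

import numpy as np

def intervention_distribution(model, dataset, k, num_events):
    """Plug-in Monte Carlo estimate of p(e_i | do(e_{i-1} = k)):
    average the conditional p(e_i | e_{i-1}=k, D_{i-1}, T_{i-1})
    over the annotated samples of (D_{i-1}, T_{i-1})."""
    probs = np.zeros(num_events)
    for D_prev, T_prev in dataset:                      # samples of the back-door variables
        probs += model.conditional(k, D_prev, T_prev)   # length-|E| probability vector
    return probs / len(dataset)

def causal_score(P):
    """P[k, l] = p(e_i = l | do(e_{i-1} = k)).  The compatibility score
    S(k, l) normalizes each column over the intervened-on events (Eq. 4)."""
    return P / P.sum(axis=0, keepdims=True)

# Usage sketch: build the |E| x |E| matrix P once, then score event pairs.
# P = np.stack([intervention_distribution(model, annotated_data, k, E)
#               for k in range(E)])
# S = causal_score(P)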
2
Let us now describe our proposal to integrate word embeddings into ROUGE in greater detail. To start off, we first describe the word embeddings that we intend to adopt. A word embedding is really a function W, where W : w → R^n, and w is a word or word sequence. For our purpose, we want W to map two words w_1 and w_2 such that their respective projections are closer to each other if the words are semantically similar, and further apart if they are not. Mikolov et al. (2013b) describe one such variant, called word2vec, which gives us this desired property. We will thus be making use of word2vec.

We will now explain how word embeddings can be incorporated into ROUGE. There are several variants of ROUGE, of which ROUGE-1, ROUGE-2, and ROUGE-SU4 have often been used, because they have been found to correlate well with human judgements (Lin, 2004a; Over and Yen, 2004; Owczarzak and Dang, 2011). ROUGE-1 measures the amount of unigram overlap between model summaries and automatic summaries, and ROUGE-2 measures the amount of bigram overlap. ROUGE-SU4 measures the amount of overlap of skip-bigrams, which are pairs of words appearing in the same order as in the sentence, possibly with other words in between. In each of these variants, overlap is computed by matching the lexical form of the words within the target pieces of text. Formally, we can define this as a similarity function f_R such that:

EQUATION

where w_1 and w_2 are the words (unigrams or n-grams) being compared. In our proposal, which we refer to as ROUGE-WE, we define a new similarity function f_WE such that:

EQUATION

where w_1 and w_2 are the words being compared, and v_x = W(w_x). OOV here denotes the situation where we encounter a word w for which our word embedding function W returns no vector. For the purpose of this work, we make use of a set of 3 million pre-trained vector mappings trained on part of Google's news dataset (Mikolov et al., 2013a) for W.

Reducing OOV terms for n-grams. With our formulation for f_WE, we are able to compute variants of ROUGE-WE that correspond to those of ROUGE, including ROUGE-WE-1, ROUGE-WE-2, and ROUGE-WE-SU4. However, despite the large number of vector mappings that we have, there will still be a large number of OOV terms in the case of ROUGE-WE-2 and ROUGE-WE-SU4, where the basic units of comparison are bigrams. To solve this problem, we can compose individual word embeddings together. We follow the simple multiplicative approach described by Mitchell and Lapata (2008), where the individual vectors of the constituent tokens are multiplied together to produce the vector for an n-gram, i.e.,

EQUATION

where w is an n-gram composed of individual word tokens, i.e., w = w_1 w_2 . . . w_n. Multiplication between two vectors W(w_i) = {v_{i1}, . . . , v_{ik}} and W(w_j) = {v_{j1}, . . . , v_{jk}} in this case is defined element-wise:

EQUATION
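A minimal sketch of the soft-matching function and the multiplicative n-gram composition, assuming gensim-style word2vec KeyedVectors. Treating f_WE as cosine similarity with a zero score for OOV units is our reading of the (elided) equations, and exact-match units are given full credit.

import numpy as np
# from gensim.models import KeyedVectors
# w2v = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin", binary=True)

def embed(w2v, ngram):
    """Embed a token or an n-gram (tokens joined by spaces).  N-grams are
    composed by element-wise multiplication of their tokens' vectors;
    returns None if any constituent token is out of vocabulary."""
    vecs = []
    for tok in ngram.split():
        if tok not in w2v:
            return None
        vecs.append(np.asarray(w2v[tok], dtype=float))
    v = vecs[0].copy()
    for u in vecs[1:]:
        v *= u                                   # multiplicative composition
    return v

def f_we(w2v, w1, w2):
    """Soft matching score between two (possibly multi-word) units."""
    if w1 == w2:
        return 1.0
    v1, v2 = embed(w2v, w1), embed(w2v, w2)
    if v1 is None or v2 is None:                 # OOV: no soft credit
        return 0.0
    denom = np.linalg.norm(v1) * np.linalg.norm(v2)
    return float(np.dot(v1, v2) / denom) if denom > 0 else 0.0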
2
Our task is to identify plausible donor-loan word pairs in a language pair. While modeling string transductions is a well-studied problem in NLP, we wish to be able to learn the cross-lingual correspondences from minimal amounts of data, so we propose a linguistically-motivated approach: we formulate a scoring model inspired by Optimality Theory (OT; discussed below), in which borrowing candidates are ranked by universal constraints posited to underly the human faculty of language, and the candidates are determined by transduction processes articulated in prior studies of contact linguistics. As shown in figure 1, our model is conceptually divided into three main parts: (1) mapping of orthographic word forms in two languages into a common space of their phonetic representation; (2) generation of loanword pronunciation candidates from a donor word; (3) ranking of generated loanword candidates, based on linguistic constraints of the donor and recipient languages. Parts (1) and (2) are rule-based; whereas (3) is learned. Each component of the model is discussed in detail in the following sections.The model is implemented within a finite-state cascade. Parts (1) and (2) amount to unweighted string transformation operations. In (1), we convert orthographic word forms to their pronunciations in the International Phonetic Alphabet (IPA), these are pronunciation transducers. In (2) we syllabify donor pronunciations, then perform insertion, deletion, and substitution of phonemes and morphemes (affixes), to generate multiple loanword candidates from a donor word. Although string transformation transducers in (2) can generate loanword candidates that are not found in a recipient language vocabulary, such can-didates are filtered out due to composition with the recipient language lexicon acceptor.We perform string transformations from donor to recipient (recapitulating the historical process). However, the resulting relation (i.e., the final composed transducer) is a bidirectional model which can just as well be used to reason about underlying donor forms given recipient forms. To employ the model in a specific direction, one needs to optimize parametersweights on transitions-to generate a desired set of outputs from a specific input. Our model is trained to discriminate a donor word given a loanword. In part (3), candidates are "evaluated" (i.e., scored) with a weighted sum of universal constraint violations. The non-negative weights, which we call "cost vector", constitute our model parameters and are learned using a small training set of donor-recepient pairs. We use a shortest path algorithm to find the path with the minimal cost.OT: constraint-based evaluation Our decision to evaluate borrowing candidates by weighting counts of "constraint violations" is based on Optimality Theory, which has shown that complex surface phenomena can be well-explained as the interaction of constraints on the form of outputs and the relationships of inputs and outputs (Kager, 1999) . 
Although our linear scoring scheme departs from OT's standard evaluation assumptions (namely, the assumption of an ordinal constraint ranking and strict dominance rather than constraint "weighting"), we are still able to obtain effective models.Although originally a theory of monolingual phonology, OT has been adapted to account for borrowing by treating the donor language word as the underlying form for the recipient language; that is, the phonological system of the recipient language is encoded as a system of constraints, and these constraints account for how the donor word is adapted when borrowed. There has been substantial prior work in linguistics on borrowing in the OT paradigm (Yip, 1993; Davidson and Noyer, 1997; Jacobs and Gussenhoven, 2000; Kang, 2003; Broselow, 2004; Adler, 2006; Rose and Demuth, 2006; Kenstowicz and Suchato, 2006; Kenstowicz, 2007; Mwita, 2009) , but none of it has led to computational realizations.
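To illustrate the linear "evaluation" step, the sketch below scores each candidate by the dot product of the learned cost vector with its constraint-violation counts and returns the minimum-cost candidate; in the actual system this minimization is carried out as a shortest-path search over the weighted finite-state cascade, and the candidate forms, constraint counts, and weights shown here are purely hypothetical.

def evaluate_candidates(candidates, weights):
    """Score loanword candidates by a weighted sum of constraint violations.

    candidates: dict mapping a candidate form to its violation-count vector
                (one count per universal constraint).
    weights:    non-negative cost vector learned from donor-recipient pairs.
    Returns the candidate with the minimal total cost (shortest-path analogue).
    """
    def cost(violations):
        return sum(w * v for w, v in zip(weights, violations))
    return min(candidates, key=lambda form: cost(candidates[form]))

# Hypothetical example with three constraints:
# candidates = {"sokoli": [0, 1, 0], "skoli": [1, 0, 1]}
# weights = [2.0, 0.5, 1.0]
# evaluate_candidates(candidates, weights)   # -> "sokoli" (cost 0.5 vs 3.0)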
2
In this section, we introduce our framework for harvesting the question-answer pairs. As described above, it consists of the question generator CorefNQG ( Figure 2) and a candidate answer extraction module. During test/generation time, we (1) run the answer extraction module on the input text to obtain answers, and then (2) run the question generation module to obtain the corresponding questions.As shown in Figure 2 , our generator prepares the feature-rich input embedding -a concatenation of (a) a refined coreference position feature embedding, (b) an answer feature embedding, and (c) a word embedding, each of which is described below. It then encodes the textual input using an LSTM unit (Hochreiter and Schmidhuber, 1997) . Finally, an attention-copy equipped decoder is used to decode the question.More specifically, given the input sentence S (containing an answer span) and the preceding context C, we first run a coreference resolution system to get the coref-clusters for S and C and use them to create a coreference transformed input sentence: for each pronoun, we append its most representative non-pronominal coreferent mention. Specifically, we apply the simple feedforward network based mention-ranking model of Clark and Manning (2016) to the concatenation of C and S to get the coref-clusters for all entities in C and S. The C&M model produces a score/representation s for each mention pair where W m is a 1 × d weight matrix and b is the bias. h m (m 1 , m 2 ) is representation of the last hidden layer of the three layer feedforward neural network. For each pronoun in S, we then heuristically identify the most "representative" antecedent from its coref-cluster. (Proper nouns are preferred.) We append the new mention after the pronoun. For example, in Table 1 , "the panthers" is the most representative mention in the coref-cluster for "they". The new sentence with the appended coreferent mention is our coreference transformed input sentence S (see Figure 2 ).(m 1 , m 2 ), s(m 1 , m 2 ) = W m h m (m 1 , m 2 ) + b m (2) … Decoder LSTMsCoreference Position Feature Embedding For each token in S , we also maintain one position feature f c = (c 1 , ..., c n ), to denote pronouns (e.g., "they") and antecedents (e.g., "the panthers"). We use the BIO tagging scheme to label the associated spans in S . "B_ANT" denotes the start of an antecedent span, tag "I_ANT" continues the antecedent span and tag "O" marks tokens that do not form part of a mention span. Similarly, tags "B_PRO" and "I_PRO" denote the pronoun span. (See Table 1 , "coref. feature".)Refined Coref. Position Feature Embedding Inspired by the success of gating mecha-nisms for controlling information flow in neural networks (Hochreiter and Schmidhuber, 1997; Dauphin et al., 2017) , we propose to use a gating network here to obtain a refined representation of the coreference position feature vectors f c = (c 1 , ..., c n ). The main idea is to utilize the mention-pair score (see Equation 2) to help the neural network learn the importance of the coreferent phrases. We compute the refined (gated) coreference position feature vectorEQUATIONwhere denotes an element-wise product between two vectors and ReLU is the rectified linear activation function. score i denotes the mentionpair score for each antecedent token (e.g., "the" and "panthers") with the pronoun (e.g., "they"); score i is obtained from the trained model (Equation 2) of the C&M. If token i is not added later as an antecedent token, score i is set to zero. 
W a , W b are weight matrices and b is the bias vector.Answer Feature Embedding We also include an answer position feature embedding to generate answer-specific questions; we denote the answer span with the usual BIO tagging scheme (see, e.g., "the arizona cardinals" in Table 1 ). During training and testing, the answer span feature (i.e., "B_ANS", "I_ANS" or "O") is mapped to its feature embedding space: f a = (a 1 , ..., a n ).To obtain the word embedding for the tokens themselves, we just map the tokens to the word embedding space:x = (x 1 , ..., x n ).Final Encoder Input As noted above, the final input to the LSTM-based encoder is a concatenation of (1) the refined coreference position feature embedding (light blue units in Figure 2 ), (2) the answer position feature embedding (red units), and (3) the word embedding for the token (green units),EQUATIONEncoder As for the encoder itself, we use bidirectional LSTMs to read the input e = (e 1 , ..., e n ) in both the forward and backward directions. After encoding, we obtain two sequences of hidden vectors, namely,− → h = ( − → h 1 , ..., − → h n ) and ← − h = ( ← − h 1 , ..., ← − h n ).The final output state of the encoder is the concatenation of − → h and ← − h whereEQUATIONQuestion Decoder with Attention & Copy On top of the feature-rich encoder, we use LSTMs with attention (Bahdanau et al., 2015) as the decoder for generating the question y 1 , ..., y m one token at a time. To deal with rare/unknown words, the decoder also allows directly copying words from the source sentence via pointing (Vinyals et al., 2015) . At each time step t, the decoder LSTM reads the previous word embedding w t−1 and previous hidden state s t−1 to compute the new hidden state,EQUATIONThen we calculate the attention distribution α t as in Bahdanau et al. (2015) ,EQUATIONwhere W c is a weight matrix and attention distribution α t is a probability distribution over the source sentence words. With α t , we can obtain the context vector h * t ,EQUATIONThen, using the context vector h * t and hidden state s t , the probability distribution over the target (question) side vocabulary is calculated as,P vocab = softmax(W d concat(h * t , s t )) (9)Instead of directly using P vocab for training/generating with the fixed target side vocabulary, we also consider copying from the source sentence. The copy probability is based on the context vector h * t and hidden state s t ,EQUATIONand the probability distribution over the source sentence words is the sum of the attention scores of the corresponding words,EQUATIONFinally, we obtain the probability distribution over the dynamic vocabulary (i.e., union of original target side and source sentence vocabulary) by summing over P copy and P vocab ,EQUATIONwhere σ is the sigmoid function, and W d , W e , W f are weight matrices.We frame the problem of identifying candidate answer spans from a paragraph as a sequence labeling task and base our model on the BiLSTM-CRF approach for named entity recognition (Huang et al., 2015) . Given a paragraph of n tokens, instead of directly feeding the sequence of word vectors x = (x 1 , ..., x n ) to the LSTM units, we first construct the feature-rich embedding x for each token, which is the concatenation of the word embedding, an NER feature embedding, and a character-level representation of the word (Lample et al., 2016). 
We use the concatenated vector as the "final" embedding x for the token,EQUATIONwhere CharRep i is the concatenation of the last hidden states of a character-based biLSTM. The intuition behind the use of NER features is that SQuAD answer spans contain a large number of named entities, numeric phrases, etc. Then a multi-layer Bi-directional LSTM is applied to (x 1 , ..., x n ) and we obtain the output state z t for time step t by concatenation of the hidden states (forward and backward) at time step t from the last layer of the BiLSTM. We apply the softmax to (z 1 , ..., z n ) to get the normalized score representation for each token, which is of size n × k, where k is the number of tags.Instead of using a softmax training objective that minimizes the cross-entropy loss for each individual word, the model is trained with a CRF (Lafferty et al., 2001 ) objective, which minimizes the negative log-likelihood for the entire correct sequence: − log(p y ),EQUATIONwhere q(x , y) = n t=1 P t,yt + n−1 t=0 A yt,y t+1 , P t,yt is the score of assigning tag y t to the t th token, and A yt,y t+1 is the transition score from tag y t to y t+1 , the scoring matrix A is to be learned. Y represents all the possible tagging sequences.
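For the attention-copy decoder described above (Eqs. 7-12), the following PyTorch sketch shows one decoding step; the attention scoring is simplified to a bilinear form, W_c, W_d, W_e, W_f are assumed to be nn.Linear modules of the appropriate shapes, and the source token ids are assumed to already be mapped into the shared (dynamic) vocabulary.

import torch
import torch.nn.functional as F

def copy_decoder_step(s_t, enc_h, src_ids, W_c, W_d, W_e, W_f):
    """One decoding step of the attention-copy decoder.

    s_t:     (B, d)      current decoder hidden state
    enc_h:   (B, L, d)   encoder states h_1..h_n
    src_ids: (B, L)      source token ids in the extended vocabulary
    """
    # Attention distribution over source positions (Eq. 7, simplified scoring).
    scores = torch.bmm(enc_h, W_c(s_t).unsqueeze(-1)).squeeze(-1)     # (B, L)
    alpha = F.softmax(scores, dim=-1)
    # Context vector (Eq. 8).
    h_star = torch.bmm(alpha.unsqueeze(1), enc_h).squeeze(1)          # (B, d)
    # Generation distribution over the target vocabulary (Eq. 9).
    p_vocab = F.softmax(W_d(torch.cat([h_star, s_t], dim=-1)), dim=-1)
    # Copy gate (Eq. 10).
    p_copy = torch.sigmoid(W_e(h_star) + W_f(s_t)).squeeze(-1)        # (B,)
    # Copy distribution: scatter attention mass onto source tokens (Eq. 11).
    p_src = torch.zeros_like(p_vocab).scatter_add(1, src_ids, alpha)
    # Final mixture over the dynamic vocabulary (Eq. 12).
    return p_copy.unsqueeze(-1) * p_src + (1 - p_copy).unsqueeze(-1) * p_vocab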
2
Our main focus was to treat the task as a sequential labeling problem, which in recent NLP research has frequently been tackled using conditional random fields (Lafferty et al., 2001 ), a class of probabilistic graphical model that integrates information from multiple features, and has enjoyed success in tasks such as shallow parsing (Sha and Pereira, 2003) . We apply CRFs to learn a sequential labeler for the shared task on the basis of 4 automatically-extracted features: the surface form of the word (WORD), the part-of-speech of the word in context (POS), IOB tags for verb and noun phrases (CHUNK) and named entity recognition with NE type such as person (NER).POS, CHUNK and NER were extracted using SENNA v3.0 (Collobert et al., 2011) , an off-theshelf shallow parsing system based on a neural network architecture. We chose SENNA for this task due to its near state-of-the-art accuracy on tagging tasks and relatively fast runtime. One challenge in using SENNA is that it expects input to be segmented at the sentence level. However, this information is obviously missing from the stream-of-words provided for the shared task. Restoring sentence boundaries is a non-trivial task, and automatic methods (e.g. Kiss and Strunk, 2006) typically make use of casing and punctuation information (e.g. a period followed by a capitalized word is highly indicative of a sentence boundary). In order to obtain POS, CHUNK and NER tags from SENNA, we segmented the text using a fixed-size sliding window approach. From the original stream of words, we extracted pseudosentences consisting of sequences of 18 consecutive words. The start of each sequence was offset from the previous sequence by 6 words, resulting in each word in the original appearing in three pseudo-sentences (except the words right at the start and end of the stream). SENNA was used to tag each pseudo-sentence, and the final tag assigned to each word was the majority label amongst the three. The rationale behind this overlapping window approach was to allow each word to appear near the beginning, middle, and end of a pseudo-sentence, in case sentence position had an effect on SENNA's predictions. In practice, for over 92% of words all predictions were the same. We did not carry out an evaluation of the accuracy of SENNA's predictions due to a lack of goldstandard data, but anecdotally we observed that the POS and CHUNK output generally seemed sensible. We also observed that for NER, the output appeared to achieve high precision but rather low recall; this is likely due to SENNA normally utilizing casing and punctuation in carrying out NER.To implement our sequence labeler, we made use of CRFSUITE version 0.12 (Okazaki, 2007) . CRFSUITE provides a set of fast command-line tools with a simple data format for training and tagging. For our training, we used L2-regularized stochastic gradient descent, which we found to converge faster than the default limited-memory BFGS while attaining comparable extrema. We also made use of the supplied tools to facilitate sequential attribute generation. We based our feature template on the example template included with CRFSUITE for a chunking task. For WORD, single words are considered for a (-2,2) context (i.e. two words before and two words after, as well as word bigrams including the current word. For POS, CHUNK and NER, we used a (-2,2) context for single tags, bigrams and trigrams. 
This means that for word bigrams, we also utilized features that captured (1) two words before and (2) two words after, in both cases excluding the target word itself.

We treated the task as a joint learning problem over the casing and punctuation labels, reasoning that the two tasks are highly mutually informative, as certain punctuation strongly influences the casing in the immediate context, e.g. a period often ends a sentence and thus a word followed by a period is expected to be followed by a capitalized word. We trained the labeler to output four distinct labels: (FF) the word should not be capitalized and should not be followed by punctuation, (FT) the word should not be capitalized and should be followed by punctuation, (TF) the word should be capitalized and should not be followed by punctuation, and (TT) the word should be capitalized and should be followed by punctuation. We applied the same pseudo-sentence segmentation to the text that was carried out to pre-process the word stream for use with SENNA, and again the majority label amongst the three predictions was used as the final output.

Table 1: F-score attained by adding each feature incrementally, using only the organizer-supplied training data, broken down over the two component tasks. The average of the two components is the metric by which the shared task was judged.

Table 1 summarizes the effect of adding each feature to the system, using only the "basic" training data. The result attained using only word features is marginally better than the organizer-supplied hidden Markov model baseline (0.461 vs 0.449). The biggest gain comes from adding POS, and further improvements are achieved by using CHUNK and NER.
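The overlapping-window scheme used both for SENNA pre-processing and for the final labeling can be written compactly; the sketch below assumes a tag_fn callable standing in for SENNA or the trained CRF tagger and reproduces the 18-word window with a 6-word offset and majority voting.

from collections import Counter

def window_tag(words, tag_fn, window=18, step=6):
    """Tag a stream of words with a sentence-level tagger by sliding a
    fixed-size window over it and taking the majority label per word.

    tag_fn: callable mapping a list of words (a pseudo-sentence) to a
            list of labels of the same length (a stand-in for SENNA or
            the trained CRF labeler).
    """
    votes = [[] for _ in words]
    for start in range(0, len(words), step):
        chunk = words[start:start + window]
        if not chunk:
            break
        for offset, label in enumerate(tag_fn(chunk)):
            votes[start + offset].append(label)
    # Majority vote; each interior word is covered by three windows.
    return [Counter(v).most_common(1)[0][0] for v in votes]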
2
The structure of our model and details of each component is shown in figure 2. We can see the overall architecture in the middle. It is divided into three components from bottom to top: 1) A text encoder which is employed to obtain text vector representations; 2) A multi-granularity hierarchical feature extractor which can exploit effective structured features from text representations; 3) A feature aggregation layer which aggregate previous multi-granularity features for relation prediction. In this section, we will introduce details of three components.Firstly, we formalize the relation extraction task. Let x = {x 1 , x 2 , ..., x n } be a sequence of input tokens, where x 0 = [CLS] and x n = [SEP] are special start and end tokens for BERT-related encoders. Let s 1 = (i, j) and s 2 = (k, l) be pairs of entity indices. The indices in s 1 and s 2 delimit entities in x:[x i , . . . , x j−1 ] and [x k , . . . , x l−1 ]. Our goal is to learn a function P (r) = f θ (x, s 1 , s 2 ),where r ∈ R indicates the relation between the entity pairs, which is marked by s1 and s2. R is a pre-defined relation set.We first employ a text encoder (e.g. BERT) to map tokens in input sentences into vector representations which can be formalized by Equ. (1).H = {h 0 , . . . , h n } = f encoder (x 0 , . . . , x n ) (1)Where H = {h 0 , . . . , h n } is the vector representation of input sentences.Our work is built upon H and does not need any external information. We employ a max-pooling operation to obtain shallow features of entity pairs and input sentences. h e 1 = Maxpooling(h i:j ) and h e 2 = Maxpooling(h k:l ) are the representations of entity pairs. h g = Maxpooling(H) is the vector representation of input sentences which contains global semantic information. The multi-granularity hierarchical feature extractor is the core component of our method and it consists of three attention mechanism for different granularity features extraction: 1) mention attention which is designed to entity mention features of given entity pairs; 2) mention-aware segment attention which is based on the entity mention features from previous mention attention and aim to extract core segment level feature which is related to entity mentions; 3) global semantic attention which focuses on the sentence level feature.The structure of mention attention is shown in the right bottom of Figure 2 . To capture more information about given entity pairs from input sentences, we extract entity mention level features by modeling the co-references (mentions) of entities implicitly. We employ a mention attention to capture information about entity 1 and 2 respectively. Specifically, we can use the representation of an entity as a query to obtain the entity mention feature from H by Equ. (2).EQUATIONWhere d is the dimension of vector representation and used to normalize vectors. Then, h ′ e 1 and h ′ e 2 model the mentions of given entity pairs implicitly and contain more entity semantic information than h e 1 and h e 2 .The structure of mention-aware segment attention is shown in the right top of Figure 2 . And the mention-aware segment attention is a hierarchical structure based on the entity mention features h ′ e 1 and h ′ e 2 from mention attention. Before introducing mention-aware segments attention, we first introduce how to get the representations of segments. We employ convolutional neural networks (CNN) with different kernel sizes to obtain all n-gram segments in texts, which can effectively capture local n-gram information with Equ. 
(3).EQUATIONWhere t is the kernel size of CNN and is empirically set as t ∈ {1, 2, 3} which means extract 1gram, 2-gram ,and 3-gram segment level features.Intuitively, the valuable segments should be highly related to given entity pairs, which can help the model to decide the relation of given entity pairs. Entity mention features h ′ e 1 and h ′ e 2 contain comprehensive information of given entity pairs and H t contain 1,2,3-gram segment level features. We can extract mention-aware segment level features by simply linking them with attention mechanisms by Equ. (4).h t m = Softmax( H t • (W m [h ′ e 1 ; h ′ e 2 ]) √ d ) • H t (4)Then, we get {h t m } t=1,2,3 which capture different granularity segments features.The structure of global semantic attention is shown in the left bottom of Figure 2 . Previous works always directly concatenate vector representation [h e 1 ; h e 2 ; h g ] as the global semantic feature of input text. We argue this is not enough to help model capture deeper sentence level semantic information for RE. Different from them, to obtain better global sentence-level semantic feature, we employ an attention operation called global semantic attention which use the concatenation of [h e 1 ; h e 2 ; h g ] as query to capture deeper semantic feature from context representation H by Equ. 5.h s = Softmax( H • (W s [h e 1 ; h e 2 ; h g ]) √ d ) • H (5)Where W s ∈ R d×3d is a linear transform matrix, and d is a hidden dimension of vectors. The concatenation of [h e 1 ; h e 2 ; h g ] is used as a query of the attention operation, which can force the extracted global semantic representation h s contain entity mention related sentence level feature.The structure of the feature aggregation layer is shown in the left top of Figure 2 . We aggregate previous multi-granularity features by Equ. (6).EQUATIONWhere W a ∈ R 6d×d is a linear transform matrix and ReLU is a nonlinear activation function.Finally, we employ a softmax function to output the probability of each relation label as follows:EQUATIONThe whole model is trained with cross entropy loss function. We call the multi-granularity hierarchical feature extractor: SMS (relation extraction with Sentence level, Mention level and mention-aware Segment level features). Tacred Semeval lr 3e-5 2e-5 warmup steps 300 0 batch size 64 32 V100 GPU 4x 1x epochs 4 10 max length 128 128 (Hendrickx et al., 2010 ) is a public dataset which contains 10,717 instances with 9 relations. The training/validation/test set contains 7,000/1,000/2,717 instances respectively.Tacred 3 is one of the largest, most widely used crowd-sourced datasets for Relation Extraction (RE), which is introduced by (Zhang et al., 2017) , with 106,264 examples built over newswire and web text from the corpus used in the yearly TAC Knowledge Base Population (TAC KBP) challenges. The training/validation/test set contains 68,124/22,631/15,509 instances respectively. It covers 42 relation types including 41 relation types and a no_relation type and contains longer sentences with an average sentence length of 36.4.Tacred Revisited 4 was proposed by (Alt et al., 2020) which aims to improve the accuracy and reliability of future RE method evaluations. They validate the most challenging 5K examples in the development and test sets using trained annotators and find that label errors account for 8% absolute F1 test error, and that more than 50% of the examples need to be relabeled. Then, they relabeled the test set and released the Tacred Revisited dataset.
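The three attention modules above share the same single-query scaled dot-product form; the sketch below factors that form out and indicates, in comments, how the mention, mention-aware segment, and global semantic features would be obtained from it. The linear maps W_m and W_s and the pooled vectors h_e1, h_e2, h_g are assumed to be defined elsewhere.

import math
import torch
import torch.nn.functional as F

def scaled_attention(query, keys):
    """Single-query scaled dot-product attention used by the mention,
    mention-aware segment, and global semantic attention modules.

    query: (B, d)      keys: (B, L, d)      returns: (B, d)
    """
    d = query.size(-1)
    scores = torch.bmm(keys, query.unsqueeze(-1)).squeeze(-1) / math.sqrt(d)
    alpha = F.softmax(scores, dim=-1)
    return torch.bmm(alpha.unsqueeze(1), keys).squeeze(1)

# Mention attention (Eq. 2): the max-pooled entity vectors act as queries
# over the token representations H to pick up entity mentions.
# h_e1_prime = scaled_attention(h_e1, H)
# h_e2_prime = scaled_attention(h_e2, H)

# Mention-aware segment attention (Eq. 4): a linear map W_m of the
# concatenated mention features queries the t-gram segment features H_t
# produced by CNNs with kernel sizes t in {1, 2, 3}.
# h_t_m = scaled_attention(W_m(torch.cat([h_e1_prime, h_e2_prime], -1)), H_t)

# Global semantic attention (Eq. 5): W_s([h_e1; h_e2; h_g]) queries H.
# h_s = scaled_attention(W_s(torch.cat([h_e1, h_e2, h_g], -1)), H)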
2
For designing the model we followed some standard preprocessing steps, which are discussed below.The following steps were applied to preprocess and clean the data before using it for training our character based neural machine translation model. We used the NLTK toolkit 3 for performing the steps.• Tokenization: Given a character sequence and a defined document unit, tokenization is the task of chopping it up into pieces, called tokens. In our case, these tokens were words, punctuation marks, numbers. NLTK supports• Truecasing: This refers to the process of restoring case information to badly-cased or non-cased text (Lita et al., 2003) . Truecasing helps in reducing data sparsity.• Cleaning: Long sentences (# of tokens > 80) were removed.Neural machine translation (NMT) is an approach to machine translation that uses neural networks to predict the likelihood of a sequence of words. The main functionality of NMT is based on the sequence to sequence (seq2seq) architecture, which is described in Section 2.2.1.Sequence to Sequence learning is a concept in neural networks, that helps it to learn sequences. Essentially, it takes as input a sequence of tokens (characters in our case)X = {x 1 , x 2 , ..., x n }and tries to generate the target sequence as outputY = {y 1 , y 2 , ..., y m }where x i and y i are the input and target symbols respectively. Sequence to Sequence architecture consists of two parts, an Encoder and a Decoder.The encoder takes a variable length sequence as input and encodes it into a fixed length vector, which is supposed to summarize its meaning and taking into account its context as well. A Long Short Term Memory (LSTM) cell was used to achieve this. The uni-directional encoder reads the characters of the Finnish texts, as a sequence from one end to the other (left to right in our case),h t = f enc (E x (x t ), h t-1 )Here, E x is the input embedding lookup table (dictionary), f enc is the transfer function for the Long Short Term Memory (LSTM) recurrent unit. The cell state h and context vector C is constructed and is passed on to the decoder.The decoder takes as input, the context vector C and the cell state h from the encoder, and computes the hidden state at time t as,s t = f dec (E y (y t-1 ), s t-1 , c t )Subsequently, a parametric function out k returns the conditional probability using the next target symbol k.(y t = k | y < t, X) = 1 Z exp(out k (E y (y t −1), s t , c t ))Z is the normalizing constant,j exp(out j (E y (y t − 1), s t , c t ))The entire model can be trained end-to-end by minimizing the log likelihood which is defined asL = − 1 N N n=1 T y n t=1 logp(y t = y t n , y ¡t n , X n )where N is the number of sentence pairs, and X n and y t n are the input sentence and the t-th target symbol in the n-th pair respectively.The input to the decoder was one hot tensor (embeddings at character level) of English sentences while the target data was identical, but with an offset of one time-step ahead.For training the model, we preprocessed the Finnish and English texts to normalize the data. Thereafter, Finnish and English characters were encoded as One-Hot vectors. The Finnish characters were considered as the input to the encoder and subsequent English characters was given as input to the decoder. A single LSTM layer was used to encode the Finnish characters. The output of the encoder was discarded and only the cell states were saved for passing on to the decoder. The cell states of the encoder and the English characters were given as input to the decoder. 
Lastly, a Dense layer was used to map the decoder output to the English characters, which were offset by one time-step. The batch size was set to 128, the number of epochs to 100, the output activation was softmax, the optimizer was rmsprop, and the loss function was categorical cross-entropy. The learning rate was set to 0.001. The architecture of the constructed model is shown in Figure 1.
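A minimal Keras sketch of the character-level encoder-decoder just described, assuming the one-hot input tensors have already been built; the character vocabulary sizes and the LSTM width are placeholder assumptions (the latter is not stated above).

```python
from tensorflow import keras
from tensorflow.keras import layers

num_fin_chars, num_eng_chars = 90, 70    # assumed character vocabulary sizes
latent_dim = 256                          # assumed LSTM size (not given above)

# Encoder: read one-hot Finnish characters; keep only the LSTM states.
encoder_inputs = keras.Input(shape=(None, num_fin_chars))
_, state_h, state_c = layers.LSTM(latent_dim, return_state=True)(encoder_inputs)
encoder_states = [state_h, state_c]

# Decoder: one-hot English characters, initialised with the encoder states.
decoder_inputs = keras.Input(shape=(None, num_eng_chars))
decoder_lstm = layers.LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=encoder_states)
decoder_outputs = layers.Dense(num_eng_chars, activation="softmax")(decoder_outputs)

model = keras.Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=0.001),
              loss="categorical_crossentropy")

# encoder_input_data / decoder_input_data are one-hot tensors; the targets are
# the decoder inputs shifted one time-step ahead, as described above.
# model.fit([encoder_input_data, decoder_input_data], decoder_target_data,
#           batch_size=128, epochs=100)
```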
2
Hypothesis: contextualized word embedding as a sparse linear superposition of transformer factors. It is shown that word embedding vectors can be factorized into a sparse linear combination of word factors (Arora et al., 2018; Zhang et al., 2019) , which correspond to elementary semantic meanings. An example is: apple =0.09"dessert" + 0.11"organism" + 0.16 "fruit" + 0.22"mobile&IT" + 0.42"other".We view the latent representation of words in a transformer as contextualized word embedding. Similarly, we hypothesize that a contextualized word embedding vector can also be factorized as a sparse linear superposition of a set of elementary elements, which we call transformer factors.The exact definition will be presented later in this section. Due to the skip connections in each of the transformer blocks, we hypothesize that the representation in any layer would be a superposition of the hierarchical representations in all of the lower layers. As a result, the output of a particular transformer block would be the sum of all of the modifications along the way. Indeed, we verify this intuition with the experiments. Based on the above observation, we propose to learn a single dictionary for the contextualized word vectors from different layers' output.To learn a dictionary of transformer factors with non-negative sparse coding.Given a set of tokenized text sequences, we collect the contextualized embedding of every word using a transformer model. We define the set of all word embedding vectors from lth layer of transformer model as X (l) . Furthermore, we collect the embeddings across all layers into a single setX = X (1) ∪ X (2) ∪ • • • ∪ X (L) .By our hypothesis, we assume each embedding vector x ∈ X is a sparse linear superposition of transformer factors:EQUATIONwhere Φ ∈ IR d×m is a dictionary matrix with columns Φ :,c , α ∈ IR m is a sparse vector of coefficients to be inferred and is a vector containing independent Gaussian noise samples, which are assumed to be small relative to x. Typically m > d so that the representation is overcomplete. This inverse problem can be efficiently solved by FISTA algorithm (Beck and Teboulle, 2009) . The dictionary matrix Φ can be learned in an iterative fashion by using non-negative sparse coding, which we leave to the appendix section C. Each column Φ :,c of Φ is a transformer factor and its corresponding sparse coefficient α c is its activation level.Visualization by top activation and LIME interpretation. An important empirical method to visualize a feature in deep learning is to use the input samples, which trigger the top activation of the feature (Zeiler and Fergus, 2014). We adopt this convention. As a starting point, we try to visualize each of the dimensions of a particular layer, X (l) . Unfortunately, the hidden dimensions of transformers are not semantically meaningful, which is similar to the uncontextualized word embeddings (Zhang et al., 2019) . Instead, we can try to visualize the transformer factors. For a transformer factor Φ :,c and for a layer-l, we denote the 1000 contextualized word vectors with the largest sparse coefficients α (l) , which correspond to 1000 different sequences. For example, Figure 3 shows the top 5 words that activated transformer factor-17 Φ :,17 at layer-0, layer-2, and layer-6 respectively. Since a contextualized word vector is generally affected by many tokens in the sequence, we can use LIME (Ribeiro et al., 2016) to assign a weight to each token in the sequence to identify their relative importance to α c . 
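A small NumPy sketch of the sparse-coding inference step x ≈ Φα with α ≥ 0, using a plain ISTA-style proximal-gradient loop rather than the full FISTA solver referenced above; the dimensions, sparsity penalty, and random dictionary are illustrative placeholders, and dictionary learning itself is omitted.

```python
import numpy as np

def infer_sparse_codes(X, Phi, lam=0.1, n_steps=200):
    """Solve min_a 0.5*||x - Phi a||^2 + lam*||a||_1  s.t. a >= 0
    for each row of X with proximal gradient (ISTA) steps."""
    A = np.zeros((X.shape[0], Phi.shape[1]))
    L = np.linalg.norm(Phi, ord=2) ** 2          # Lipschitz constant of the gradient
    for _ in range(n_steps):
        grad = (A @ Phi.T - X) @ Phi             # gradient of the reconstruction term
        A = A - grad / L
        A = np.maximum(A - lam / L, 0.0)         # soft-threshold + non-negativity
    return A

rng = np.random.default_rng(0)
d, m, n = 768, 2048, 32                           # overcomplete dictionary: m > d
Phi = rng.standard_normal((d, m))
Phi /= np.linalg.norm(Phi, axis=0)                # unit-norm transformer factors
X = rng.standard_normal((n, d))                   # contextualized word vectors

alpha = infer_sparse_codes(X, Phi)
print(alpha.shape, float((alpha > 0).mean()))     # sparse, non-negative activations
```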
The detailed LIME-based weighting method is left to Section 3. To determine low-, mid-, and high-level transformer factors, we use an importance score. As we build a single dictionary for all of the transformer layers, the semantic meanings of the transformer factors sit at different levels. While some of the factors appear in lower layers and continue to be used in later stages, other factors may only be activated in the higher layers of the transformer network. A central question in representation learning is: "where does the network learn certain information?" To answer this question, we compute an "importance score" I^(l)_c for each transformer factor Φ_:,c at layer l. I^(l)_c is the average of the largest 1000 sparse coefficients α^(l)_c, i.e., those corresponding to X^(l)_c, the layer-l word vectors that most strongly activate the factor. We plot the importance scores of each transformer factor as a curve, as shown in Figure 2. We then use these importance score (IS) curves to identify in which layer a transformer factor emerges. Figure 2a shows an IS curve that peaks in an earlier layer; the corresponding transformer factor emerges at an early stage and may capture lower-level semantic meanings. In contrast, Figure 2b shows a peak in the higher layers, which indicates that the transformer factor emerges much later and may correspond to mid- or high-level semantic structures. More subtleties are involved when distinguishing between mid-level and high-level factors, which will be discussed later. An important characteristic is that the IS curve of each transformer factor is relatively smooth. This indicates that if a vital feature is learned in the early layers, it will not disappear in later stages; instead, it is carried all the way to the end with gradually decaying weight, since many more features join along the way. Similarly, abstract information learned in higher layers develops slowly from the early layers. Figures 3 and 5 confirm this idea, which will be explained in the next section.
2
PHS-BERT has the same architecture as BERT. Fig. 1 illustrates an overview of pretraining, finetuning, and datasets used in this study. We describe BERT and then the pretraining and fine-tuning process employed in PHS-BERT.PHS-BERT has the same architecture as BERT. BERT was trained on 2 tasks: mask language mod- Figure 1 : An overview of pretraining, fine-tuning, and the various tasks and datasets used in PHS benchmarking eling (MLM) (15% of tokens were masked and next sentence prediction (NSP) (Given the first sentence, BERT was trained to predict whether a selected next sentence was likely or not). BERT is pretrained on Wikipedia and BooksCorpus and needs task-specific fine-tuning. Pretrained BERT models include BERT Base (12 layers, 12 attention heads, and 110 million parameters), as well as BERT Large (24 layers, 16 attention heads, and 340 million parameters).We followed the standard pretraining protocols of BERT and initialized PHS-BERT with weights from BERT during the training phase instead of training from scratch and used the uncased version of the BERT model. PHS-BERT is the first domain-specific LM for tasks related to PHS and is trained on a corpus of health-related tweets that were crawled via the Twitter API. Focusing on the tasks related to PHS, keywords used to collect pretraining corpus are set to disease, symptom, vaccine, and mental healthrelated words in English. Pre-processing methods similar to those used in previous works (Müller et al., 2020; Nguyen et al., 2020) were employed prior to training. Retweet tags were deleted from the raw corpus, and URLs and usernames were replaced with HTTP-URL and @USER, respectively. Additionally, the Python emoji 3 library was used to replace all emoticons with their associated meanings. The HuggingFace 4 , an open-source python library, was used to segment tweets. Each sequence of BERT LM inputs is converted to 50,265 vocab-ulary tokens. Twitter posts are restricted to 200 characters, and during the training and evaluation phase, we used a batch size of 8. Distributed training was performed on a TPU v3-8.We applied the pretrained PHS-BERT in the binary and multi-class classification of different PHS tasks such as stress, suicide, depression, anorexia, health mention classification, vaccine, and covid related misinformation and sentiment analysis. We fine-tuned the PLMs in downstream tasks. Specifically, we used the ktrain library (Maiya, 2020) to fine-tune each model independently for each dataset. We used the embedding of the special token [CLS] of the last hidden layer as the final feature of the input text. We adopted the multilayer perceptron (MLP) with the hyperbolic tangent activation function and used Adam optimizer (Kingma and Ba, 2014) . The models are trained with a one cycle policy (Smith, 2017) at a maximum learning rate of 2e-05 with momentum cycled between 0.85 and 0.95.In social media platforms, people often use disease or symptom terms in ways other than to describe their health. In data-driven PHS, the health mention classification task aims to identify posts where users discuss health conditions rather than using disease and symptom terms for other reasons. We used PHM (Karisani and Agichtein, 2018), HMC2019 (Biddle et al., 2020) and RHMD 5 health mention-related datasets.5 https://github.com/usmaann/RHMD-Health-Mention-Dataset 4. Vaccine sentiment: Vaccines are a critical component of public health. 
On the other hand, vaccine hesitancy and refusal can result in clusters of low vaccination coverage, diminishing the effectiveness of vaccination programs. Identifying vaccine-related concerns on social media makes it possible to determine emerging risks to vaccine acceptance. We used VS1 (Dunn et al., 2020) and VS2 (Müller and Salathé, 2019) vaccine-related Twitter datasets to show the effectiveness of our model.We used the following dataset to evaluate the performance of our model on suicide risk detection.• R-SSD: For suicide ideation, we used a dataset released by Cao et al. (2019) , which contains 500 individuals' Reddit postings categorized into 5 increasing suicide risk classes from 9 mental health and suicide-related subreddits.
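A hedged ktrain sketch of the fine-tuning setup described earlier (one-cycle policy with a maximum learning rate of 2e-05, batch size 8). The model identifier, class names, toy data, sequence length, and epoch count are assumptions for illustration, not values taken from the paper.

```python
import ktrain
from ktrain import text

MODEL_NAME = "publichealthsurveillance/PHS-BERT"   # assumed Hugging Face model id

# Toy placeholder data; in practice these come from one of the PHS datasets above.
train_texts = ["I got my flu shot today", "vaccines cause harm", "not sure about it"]
train_labels = [2, 0, 1]
val_texts, val_labels = ["the vaccine is safe"], [2]

t = text.Transformer(MODEL_NAME, maxlen=200,               # ~200-character posts
                     class_names=["negative", "neutral", "positive"])
trn = t.preprocess_train(train_texts, train_labels)
val = t.preprocess_test(val_texts, val_labels)

model = t.get_classifier()
learner = ktrain.get_learner(model, train_data=trn, val_data=val, batch_size=8)

# One-cycle policy with a maximum learning rate of 2e-05, as described above;
# the number of epochs here is an assumption.
learner.fit_onecycle(2e-5, 3)
predictor = ktrain.get_predictor(learner.model, preproc=t)
```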
2
In this section, we provide a detailed description of the algorithm behind the construction of CWN. The system takes as input the WordNet lexical database and a set of collocation lists pertaining to predefined semantic categories, and outputs CWN. First, we collect training data and perform automatic disambiguation (Section 3.1). Then, we use this disambiguated data for training a linear transformation matrix from the base vector space, i.e., SENSEMBED, to the collocate vector space, i.e., SHAREDEMBED (Section 3.2). Finally, we exploit the WordNet taxonomy to select input base collocates to which we apply the transformation matrix in order to obtain a sorted list of candidate collocates (Section 3.3).As is common in previous work on semantic collocation classification (Moreno et al., 2013; , our training set consists of a list of manually annotated collocations. For this purpose, we randomly selected nouns from the Macmillan Dictionary and manually classified their corresponding collocates with respect to their semantic categories. 8 Note that there may be more than one collocate for each base. Since collocations with different collocate meanings are not evenly distributed in language (e.g., we may tend to use more often collocations conveying the idea of 'intense' and 'perform' than 'begin to perform'), the number of instances per category in our training data also varies significantly (see Table 1 ).Our training dataset consists at this stage of pairs of plain words, with the inherent ambiguity this gives raise to. We surmount this challenge by applying a disambiguation strategy based on the notion that, from all the available senses for a collocation's base and collocate, their correct senses are those which are most similar. This is a strategy that has been proved effective in previous concept-level disambiguation tasks (Delli Bovi et al., 2015) . Formally, let us denote the SENSEMBED vector space as S, and our original text-based training data as T. For each training collocation b, c ∈ T we consider all the available lexicalizations (i.e., senses) for both the base b and the collocate c in S, namely L b = {l 1 b ...l n b }, and L c = {l 1 c ...l m c }, and their corresponding set of sense embeddings V b = { v 1 b , ..., v{ v 1 c , ..., v m c }.Our aim is to select, among all possible pairs of senses, the pair l b , l c that maximizes the cosine similarity between the corresponding embeddings v b and v c , which is computed as follows:EQUATIONOur disambiguation strategy yields a set of disambiguated pairs D. This is the input for the next module of the pipeline, the learning of a transformation matrix aimed at retrieving WordNet synset collocates for any given WordNet synset base.Among the many properties of word embeddings (Mikolov et al., 2013a; Mikolov et al., 2013c) that have been explored so far in the literature (e.g., modeling analogies or projecting similar words nearby in the vector space), the most pertinent to this work is the linear relation that holds between semantically similar words in two analogous spaces (Mikolov et al., 2013b) . Mikolov et al.'s original work learned a linear projection between two monolingual embeddings models to train a word-level machine translation system between English and Spanish. Other examples include the exploitation of this property for language normalization, i.e. 
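A NumPy sketch of the disambiguation step described above: among all available sense-embedding pairs for a base and its collocate, pick the pair that maximizes cosine similarity. The vectors and sense ids are random placeholders standing in for SENSEMBED entries.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def disambiguate(base_senses, coll_senses):
    """base_senses / coll_senses map sense ids to embeddings.
    Returns the (base sense, collocate sense) pair with maximal cosine similarity."""
    best, best_pair = -1.0, None
    for lb, vb in base_senses.items():
        for lc, vc in coll_senses.items():
            sim = cosine(vb, vc)
            if sim > best:
                best, best_pair = sim, (lb, lc)
    return best_pair, best

rng = np.random.default_rng(1)
# e.g. two candidate senses for the base "desire", two for the collocate "ardent"
V_b = {"desire#n#1": rng.standard_normal(300), "desire#n#2": rng.standard_normal(300)}
V_c = {"ardent#a#1": rng.standard_normal(300), "ardent#a#2": rng.standard_normal(300)}

pair, sim = disambiguate(V_b, V_c)
print(pair, round(sim, 3))
```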
finding regular English counterparts of Twitter language (Tan et al., 2015) , or hypernym discovery .In our specific case, we learn a linear transformation from v b to v c , aiming at reflecting an inherent condition of collocations. Since collocations are a linguistic phenomenon that is more frequent in the narrative discourse than in formal essays, they are less likely to appear in an encyclopedic corpus (recall that SENSEMBED vectors, which we use, are trained on a dump of the English Wikipedia). This motivates the use of S as our base space, and our SHAREDEMBED X as the collocate model, as it was trained over more varied language such as blog posts or news items.Then, we construct our linear transformation model as follows: For each disambiguated collocation l b , l c ∈ D, we first retrieve the corresponding base vectors v b . Next, we exploit the fact that X contains both BabelNet synsets and words, and derive for each l c two items, namely the vectors associated to its lexicalization (word-based) and its BabelNet synset. For example, for the training pair ardent bn:00097467a, desire bn:00026551n ∈ D, we learn two linear mappings, namely ardent bn:00097467a → desire and ardent bn:00097467a → bn:00026551n. We opt for this strategy, which doubles the size of the training data in most lexical functions (depending on coverage), due to the lack of resources of manually-encoded classification of collocations. By following this strategy we obtain an extended training set D* = { b i , c i } n i=1 ( b i ∈ X , c i ∈ S, n ≥ |D|).Then, we construct a base matrix B = b 1 . . . b n and a collocate matrix C = [ c 1 . . . c n ] with the resulting set of training vector pairs. We use these matrices to learn a linear transformation matrix Ψ ∈ R d S ×d X , where d S and and d X are, respectively, the number of dimensions of the base vector space (i.e., SENSEMBED) and the collocate vector space (SHAREDEMBED). 9 Following the notation in Tan et al. (2015) , this transformation can be depicted as:BΨ ≈ CAs in Mikolov et al.'s original approach, the training matrix is learned by solving the following optimization problem:min Ψ n i=1 b i − c i 2Having trained Ψ, the next step of the pipeline is to apply it over a subset of WordNet's base concepts and their hyponyms. For each synset in this branch, we apply a scoring and ranking procedure which assigns a collocates-with score. If such score is higher than a predefined threshold, tuned over a development set, this relation is included in CWN.During the task of enriching WordNet with collocational information, we first gather a set of base Word-Net synsets by traversing WordNet hypernym hierarchy starting from those base concepts that are most fit for the input semantic category. 10 Then, the transformation matrix learned in Section 3.2 is used to find candidate WordNet synset collocates (mostly verbs or adjectives) for each base WordNet synset.As explained in Section 3, WordNet synsets are mapped to BabelNet synsets, which in turn map to as many vectors in SENSEMBED as their associated lexicalizations. Formally, given a base synset b, we apply the transformation matrix to all the SENSEMBED vectorsV b = { v 1 b , ..., v n b } associated with its lexicalizations. For each v i b ∈ V b , we first get the vector ψ i b = v i b Ψobtained as a result of applying the transformation matrix and then we gather the subset W i b = { w i,1 b . . . w i,10 b } ( w i,j b ∈ X ) of the top ten closest vectors by cosine similarity to ψ i b in the SHAREDEMBED vector space X . 
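A NumPy sketch of learning the transformation Ψ from the stacked training pairs, i.e. solving the least-squares problem min_Ψ Σ_i ||b_i Ψ − c_i||², which is the BΨ ≈ C relation above. Dimensions and random matrices are placeholders for the SENSEMBED/SHAREDEMBED vectors.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d_S, d_X = 5000, 400, 300        # training pairs, base dim, collocate dim

B = rng.standard_normal((n, d_S))   # rows: base sense vectors b_i (SENSEMBED)
C = rng.standard_normal((n, d_X))   # rows: collocate vectors c_i (SHAREDEMBED)

# Closed-form least-squares solution of  B @ Psi ≈ C
Psi, residuals, rank, _ = np.linalg.lstsq(B, C, rcond=None)
print(Psi.shape)                    # (d_S, d_X)

# Applying the learned map to a new base vector gives a point in the collocate
# space, whose nearest neighbours by cosine similarity are candidate collocates.
b_new = rng.standard_normal(d_S)
psi_b = b_new @ Psi
```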
Each w_b^{i,j} is ranked according to a scoring function λ(·), computed as λ(w_b^{i,j}) = cos(ψ_b^i, w_b^{i,j}) / j. This scoring function takes into account both the cosine similarity and the relative position (the rank j) of the candidate collocate with respect to the other neighbors in the vector space. Apart from sorting the list of candidate collocates, this scoring function is also used to measure the confidence of the retrieved collocate synsets in CWN.
2
Three different models were developed to identify hate or offensive contents in Dravidian posts; (i) conventional learning based models, (ii) neural network-based models, and (iii) transfer learningbased models. In this section, we explain the working of each model in detail. A detailed diagram for presented models is shown in Figure 1 . The results of the models are explained in Section 5.In conventional machine learning-based classifications, the current study explored the use of different N-gram TF-IDF word and character features. In the case of character, 1 to 6 gram character TF-IDF features were used, whereas, in case of a word, 1 to 3 gram word TF-IDF features were used. The extracted features were fed to classifiers like Support Vector Machine (SVM), Logistic Regression (LR), Naive Bayes (NB), Random Forest (RF). The detailed performance report of word n-grams and character n-grams are shown in Section 5.Initially, the character n-grams TF-IDF features (1-6 grams) extracted in previous Section 4.1 were used as an input to a vanilla neural network (VNN) model. For the vanilla neural network, four fully connected layers were sequenced, having 1024, 256, 128, and 2 neurons in first, second, third and fourth layer, respectively. We kept two neurons in the final layer (or output layers) to identify each input in offensive groups. Based on the probabilities of softmax activation with output neurons, the last class was determined. In the intermediate layers, the activation function was ReLu. The proposed vanilla neural network was trained with cross-entropy loss function and Adam optimizer. The training dropout was 0.3 and the batch size was 32.Consequently, other deep learning models for offensive groups prediction were also developed. A hybrid attention-based Bi-LSTM and CNN network was built as shown in Figure 1 . The detailed working of the CNN and attention-based Bi-LSTM network for text classification can be seen in (Jang et al., 2020; Xu et al., 2020; Saumya et al., 2019) . To CNN, character embedding was the input, whereas to Bi-LSTM, word embedding was the input. To prepare the character embedding, a one-hot vector representation of characters were used. Every input was padded with a maximum of 200 characters with repetition. The total unique character found in the vocabulary was 70. Therefore, a (200 × 70) dimensional embedding matrix was given as an input to CNN. To extract the features from the convolution layer, 128 different filters for each 1-gram, 2-gram, 3-gram, and 4-gram were used. The output of the first convolution layer was fed to the second convolution layer with similar filter dimensions. The features extracted from the CNN layers were then represented in a vector having 128 features using a dense layer.To prepare the word embedding input for Bi-LSTM was we used FastText 1 utilizing the language-specific code-mixed Tamil and Malayalam text for Tamil and Malayalam models, respec- tively. The skip-gram architecture was trained for ten epochs to extract the FastText embedding vectors. A maximum of 30 words embedding vectors was given input to the network in a time stamp manner. Every word was represented in a 100dimensional vector which was extracted from the embedding layer. Finally, a (30 × 100) dimensional matrix input was given to 2-layered stacked Bi-LSTM layer, followed by an attention layer. 
Finally, the output of attention-based Bi-LSTM and CNN layer is concatenated and passes through a softmax layer to predict offensive and not-offensive text.Hyperparameters tuning was done to check the performance of the proposed deep-neural model. We conducted comprehensive experiments by adjusting the learning rate, batch size, optimizer, epoch, loss function and activation function. The system performance was best with the learning rate 0.001, batch size 32, Adam optimizer, epochs 100, loss function as binary cross-entropy, and ReLU activation within the internal layers of the network. At the output layer, the activation was softmax.The current study used two different transfer models, BERT (Bidirectional Encoder Representations from Transformers ) and ULMFiT (Universal Language Model Fine-tuning for Text Classification) to accomplish the given objectives.Two different variations of BERT model 2 (Devlin et al., 2018b) is used in the current study; (i) BERT base (bert-base-uncased), and (ii) BERT multilingual (bert-base-multilingualuncased). The BERT base model is trained for English language using a masked modelling technique. Whereas, BERT multilingual is trained for 102 languages with masked language modelling. We used ktrain 3 libraries to develop the BERT based models. Both BERT variations are uncased that means it does not make a difference between a word written in upper case lower case. In training BERT-models, we fixed 30-words for the text to input in the model and used a batch size of 32 and a learning rate of 2e −5 to fine-tune the pre-trained model. The detailed description of the BERT model can be seen in (Sanh et al., 2019) . The other transfer model used was ULMFiT. It can be applied to any task in NLP. To train ULMFiT model, we used fastai library 4 . The input and hyper-parameters were the same as we used in BERT.
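As a concrete illustration of the conventional baseline described at the start of this section, the following scikit-learn sketch combines word (1-3 gram) and character (1-6 gram) TF-IDF features and feeds them to linear classifiers. The toy comments and labels are placeholders, not data from the shared task.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.svm import LinearSVC

# Toy placeholder data; in practice these are the code-mixed comments and labels.
texts = ["oru nalla padam", "very bad movie", "super movie", "worst content"]
labels = ["not_offensive", "offensive", "not_offensive", "offensive"]

features = FeatureUnion([
    ("word_tfidf", TfidfVectorizer(analyzer="word", ngram_range=(1, 3))),
    ("char_tfidf", TfidfVectorizer(analyzer="char", ngram_range=(1, 6))),
])

for name, clf in [("LinearSVC", LinearSVC()),
                  ("LogReg", LogisticRegression(max_iter=1000))]:
    pipe = Pipeline([("features", features), ("clf", clf)])
    pipe.fit(texts, labels)
    print(name, pipe.predict(["padam very bad"]))
```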
2
In order to analyze the impact of fine-tuning a BERT ranking model with limited training data, we sample standard benchmark datasets to simulate having less data available. Rather than proposing a new model, we use the BERT-MaxP model (Dai and Callan, 2019) due to its simplicity and demonstrated effectiveness on several datasets. To simulate the impact of having limited data, we prepare six different datasets that comprise relevance judgments sampled from the full dataset at a sampling rate r ∈ {0.1, 0.3, 0.5, 0.7, 0.9, 1.0}. The setting r = 1.0 is equivalent to using the full dataset. Specifically, given a dataset with N queries and M relevance judgments, the r-sampled dataset contains roughly r × N queries and exactly r × M judgments. That is, whole queries (along with all their associated judgments) are dropped first. This is accomplished by repeatedly dropping a random query until dropping another would result in fewer than r × M judgments. When this condition is reached, we loop over the remaining queries, randomly removing one judgment per query until exactly r × M judgments remain. When we split our datasets into training, validation, and test folds for experiments, sampling is applied only to training and validation; we always calculate evaluation metrics using all available judgments.
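A Python sketch of the sampling procedure just described, assuming judgments are stored as a dict from query id to a list of judgments; the data structure and toy example are assumptions for illustration.

```python
import random

def sample_judgments(qrels, r, seed=42):
    """Return an r-sampled copy of qrels (query id -> list of judgments):
    whole queries are dropped first; remaining judgments are then removed
    one per query until exactly r * M judgments are left."""
    rng = random.Random(seed)
    qrels = {q: list(js) for q, js in qrels.items()}
    target = round(r * sum(len(js) for js in qrels.values()))

    # Drop random queries while doing so still leaves at least `target` judgments.
    queries = list(qrels)
    rng.shuffle(queries)
    for q in queries:
        remaining = sum(len(js) for js in qrels.values())
        if remaining - len(qrels[q]) >= target:
            del qrels[q]

    # Remove one judgment per query (round-robin) until exactly `target` remain.
    while sum(len(js) for js in qrels.values()) > target:
        for q in list(qrels):
            if sum(len(js) for js in qrels.values()) == target:
                break
            if qrels[q]:
                qrels[q].pop(rng.randrange(len(qrels[q])))
    return qrels

toy = {f"q{i}": [f"doc{i}_{j}" for j in range(4)] for i in range(10)}
sampled = sample_judgments(toy, r=0.5)
print(len(sampled), sum(len(v) for v in sampled.values()))  # ~5 queries, 20 judgments
```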
2
In this section, we focus our discussions on the proposed Domain Confused Contrastive Learning (DCCL) under a sentiment classification scenario. The overall framework of our method is illustrated in Fig. 2 . The model will take source labeled and target unlabeled sentences as input. It will then augment the input data with domain puzzles by fabricating adversarial perturbations. With the augmented data, the next step produces a hidden representation for each instance with an encoder which will be further used to produce three losses to train the entire model, namely sentiment classification loss, contrastive loss and consistency loss.For UDA, Saito et al. (2017) Figure 3: Two sentences sampled from Book and Music reviews. Alternatively, we can match original sentences with its degraded masked versions.Moreover, it may cause negative transfer, deteriorating knowledge transfer from source domain to the target domain . Even if the matched sentences have the same label, due to huge syntactic and semantic shift, instance-based matching strategies that align examples from different domains will introduce noises for pre-trained language models, for example, aligning source domain and target domain sentences in Fig. 3 . Alternatively, we can locate and mask domainspecific tokens which are related to sentence topics and genres. Since sentences in the green box of Fig. 3 become domain-agnostic, we refer to those domain-confused sentences (one cannot tell which domain these sentences belong to) as domain puzzles. Matching distributions between the source domain and the domain puzzles, as well as the target domain and the domain puzzles, will also make language models produce domain invariant representations.However, the domain-specific tokens are not always evident, due to the discrete nature of natural languages, it is challenging to decide correct tokens to mask without hurting the semantics especially when the sentences are complicated 1 . Hence, we seek domain puzzles in the representation space and introduce adversarial perturbations, because we can rely on the model itself to produce diverse but targeted domain puzzles. Note that the purpose of adversarial attack here is not to enhance the robustness, but to construct exquisitely produced perturbations for a better domain invariance in the representation space.To generate domain-confused augmentations, we adopt adversarial attack with perturbations for domain classification. The loss for learning a domain classifier with adversarial attack can be speci- EQUATIONwhere δ 0 is the initialized noise, θ d is the parameter corresponding to the computation of the domain classification, and d is the domain label. Due to additional overhead incurred during fine-tuning large pre-trained language models, the number of iterations for perturbation estimation is usually 1 (Jiang et al., 2020; Pereira et al., 2021) , as shown in Eq. 7. We synthesize the perturbation δ by searching for an extreme direction that perplexes the domain classifier most in the embedding space, and f (x+δ; θ f ) is the crafted domain puzzles encoded by the language model.After acquiring domain puzzles, simply applying distribution matching will sacrifice discriminative knowledge learned from the source domain (Saito et al., 2017; , and instance-based matching will also overlook global intra-domain information. To learn sentiment-wise discriminative representations in the absence of the target labels, we propose to learn domain invariance via contrastive learning (CL). 
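A PyTorch sketch of crafting the domain-confusing perturbation δ on input embeddings by gradient ascent on the domain-classification loss, projected onto a norm ball (the single-step update discussed above). The encoder, domain head, norm bound, and step sizes are all placeholder assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def domain_puzzle_perturbation(embeds, domain_labels, encoder, domain_head,
                               eps=1e-2, eta=1e-3, sigma=1e-5, steps=1):
    """Craft a perturbation delta on token embeddings that maximally confuses the
    domain classifier (ascent on the domain loss, projected onto an eps-norm ball)."""
    delta = torch.randn_like(embeds) * sigma
    for _ in range(steps):
        delta.requires_grad_(True)
        logits = domain_head(encoder(embeds + delta))
        loss = F.cross_entropy(logits, domain_labels)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta = delta + eta * grad / (grad.norm() + 1e-12)
            norm = delta.norm()
            if norm > eps:                    # projection onto the eps-ball
                delta = delta * eps / norm
    return delta.detach()

# Placeholder encoder / domain head standing in for the pre-trained language model.
d_model = 128
encoder = nn.Sequential(nn.Flatten(1), nn.Linear(16 * d_model, d_model), nn.Tanh())
domain_head = nn.Linear(d_model, 2)

embeds = torch.randn(4, 16, d_model)          # (batch, seq_len, embed_dim)
domains = torch.tensor([0, 0, 1, 1])          # 0 = source, 1 = target
delta = domain_puzzle_perturbation(embeds, domains, encoder, domain_head)
print(delta.shape, float(delta.norm()))
```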
In general, CL benefits from the definition of the augmented positive and negative pairs by treating instances as classes (Chen et al., 2020a; Khosla et al., 2020; Chen et al., 2020b) . Furthermore, the contrastive loss encourages the positive pairs to be close to each other and negative pairs to be far apart. Specifically, maximizing the similarities between positive pairs learns an invariant instance-based representation, and minimizing the similarities between negative pairs learns a uniformly distributed representation from a global view, making instances gathered near the task decision boundary away from each other (Saunshi et al., 2019; Grill et al., 2020 ). This will help to enhance task discrimination of the learned model. For positive pairs, intuitively, we hope that the model could encode the original sentence and most domain-challenging examples to be closer in the representation space, gradually pulling examples to the domain decision boundary as training progresses. For negative sampling, it widens the sentiment decision boundary and promotes better sentiment-wise discriminative features for both domains. However, for cross-domain negative sampling, the contrastive loss may push the negative samples in the target (source) domain away from the anchor in the source (target) domain (see Fig. 4 (b) left). This is contradictory to the objective of domain puzzles which try to pull different domains closer. To avoid the detriment of cross-domain repulsion, excluding samples with different domains from the negative set is of great importance. Therefore, we write the following contrastive infoNCE loss (Chen et al., 2020a) as follow:EQUATIONwhere N is the mini batch size with samples from the same domain,z i = g(f (x i ; θ f )), and g(•)is one hidden layer projection head. We denote x ′ = x + δ as the domain puzzle augmentation, s(•)computes cosine similarity, 1 k̸ =i is the indicator function, and τ is the temperature hyperparameter.Given perturbed embedding x + δ, which is crafted based on domain classification, we also encourage the model to produce consistent sentiment predictions with that of the original instance f (x; θ f , θ y ).Algorithm 1 DCCL Input: For simplicity, θ is the parameter of the whole model. T : the total number of iterations, (x, y) ∼ D S : source dataset with sentiment label y, (x, d) ∼ D S D D : source and target dataset with domain label d, K: the number of iterations for updating δ, σ 2 : the initialized variance, ϵ: perturbation bound, η: the step size, γ: global learning rate, N : batch size, τ : temperature, g(•):one hidden layer projection head. α adv , α, λ and β: weighting factor. 1:for epoch = 1, .., T do 2: for minibatch N do 3:δ ← N (0, σ 2 I) 4:for m = 1, .., K do 5: g adv d ← ∇ δ L(f (x + δ; θ), d) 6: δ ← Π ∥δ∥ F ≤ϵ (δ + ηg adv d /∥g adv d ∥F ) 7: end for 8: L domain ← L(f (x; θ), d) +α adv L(f (x + δ; θ), d) 9: z = g(f (x; θ)) 10: z ′ = g(f (x + δ; θ)) 11: for i = 1, ..., N and j = 1, ..., N do 12: s ′ i = z ⊤ i z ′ j /∥zi∥∥z ′ j ∥ 13: si,j = z ⊤ i zj/∥zi∥∥zj∥ 14: end for 15: Lcontrast ← − 1 N N i log exp(s ′ i /τ ) N j 1 j̸ =i exp(s i,j /τ ) 16: Lconsist ← L(f (x; θ), f (x + δ; θ)) 17: g θ ← ∇ θ L(f (x; θ), y) + α∇ θ L domain +λ∇ θ Lcontrast +β∇ θ Lconsist 18: θ ← θ − γg θ 19:end for 20: end for Output: θ For this, we minimize the symmetric KL divergence, which is formulated as:L consist = L(f (x; θ f , θ y ), f (x + δ; θ f , θ y )). 
(9) For the overall training objective, we train the neural network end-to-end with a weighted sum of the losses, L = L(f(x; θ), y) + α L_domain + λ L_contrast + β L_consist (cf. line 17 of Algorithm 1). Details of the proposed DCCL are summarized in Algorithm 1.
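A PyTorch sketch of the in-domain contrastive term above: positives are (original, domain-puzzle) pairs, and negatives are drawn only from the other originals in the same-domain mini-batch. The projection-head outputs here are random placeholders.

```python
import torch
import torch.nn.functional as F

def in_domain_info_nce(z, z_prime, tau=0.1):
    """z, z_prime: (N, d) projections of original sentences and their domain-puzzle
    augmentations, all from the same domain. Positive pairs are (z_i, z'_i);
    negatives are the other z_j in the batch (j != i)."""
    z = F.normalize(z, dim=-1)
    z_prime = F.normalize(z_prime, dim=-1)

    pos = (z * z_prime).sum(dim=-1) / tau                 # s(z_i, z'_i) / tau
    sim = z @ z.T / tau                                   # s(z_i, z_j) / tau
    mask = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    neg = sim.masked_fill(mask, float("-inf"))            # drop the i == j term

    # -log exp(pos_i) / sum_{j != i} exp(sim_ij)
    return (torch.logsumexp(neg, dim=-1) - pos).mean()

N, d = 8, 128
z = torch.randn(N, d, requires_grad=True)
z_prime = torch.randn(N, d)
loss = in_domain_info_nce(z, z_prime)
loss.backward()
print(loss.item())
```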
2
In this section, we introduce PP-Rec for news recommendation which can consider both the personal interest of users and the popularity of candidate news. First, we introduce the overall framework of PP-Rec, as shown in Fig. 2 . Then we introduce the details of each module in PP-Rec, which are shown in Figs. 3, 4 and 5.In PP-Rec, the ranking score of recommending a candidate news to a target user is the combination of a personalized matching score s m and a news popularity score s p . The personalized matching score is used to measure the user's personal interest in the content of candidate news, and is predicted based on the relevance between news content embedding and user interest embedding. The news content embedding is generated by a knowledgeaware news encoder from both news texts and entities. The user interest embedding is generated by a popularity-aware user encoder from the content of clicked news as well as their popularity. The news popularity score is used to measure the time-aware popularity of candidate news, which is predicted by a time-aware news popularity predictor based on news content, recency, and near real-time CTR.First, we introduce the knowledge-aware news encoder, which is shown in Fig. 3 . It learns news representation from both text and entities in news title. Given a news title, we obtain the word embeddings based on word embedding dictionary pretrained on large-scale corpus to incorporate initial word-level semantic information. We also convert entities into embeddings based on pre-trained entity embeddings to incorporate knowledge information in knowledge graphs to our model. There usually exists relatedness among entities in the same news. For example, the entity "MAC" that appears with the entity "Lancome" may indicate cosmetics while it usually indicates computers when appears with the entity "Apple". Thus, we utilize an entity multi-head self-attention network (Vaswani et al., 2017 ) (MHSA) to learn entity representations by capturing their relatedness. Besides, textual contexts are also informative for learning accurate entity representations. For example, the entity "MAC" usually indicates computers if its textual contexts are "Why do MAC need an ARM CPU?" and indicates cosmetics if its textual contexts are "MAC cosmetics expands AR try-on". Thus, we propose an entity multi-head cross-attention network (MHCA) to learn entity representations from the textual contexts. Then we formulate the unified representation of each entity as the summation of its representations learned by the MHSA and MHCA networks. Similarly, we use a word MHSA network to learn word representations by capturing the relatedness among words and a word MHCA network to capture the relatedness between words and entities. Then we build the unified word representation by adding its representations generated by the word MHSA and the word MHCA networks.Since different entities usually contribute differently to news representation, we use an entity attention network to learn entity-based news representation e from entity representations. Similarly, we use a word attention network to learn word-based news representation w from word representations. Finally, we learn the unified news representation n with a weighted combination of e and w via an attention network.Next, we introduce the time-aware news popularity predictor, as shown in Fig. 4 . It is used to predict time-aware news popularity based on news content, recency, and near real-time CTR information. 
Since popular news usually have a higher click probability than unpopular news, CTR can provide good clue for popular news (Jiang, 2016) . Thus, we incorporate CTR into news popularity prediction. Besides, popularity of a news article usually dynamically changes. Popular news may become less popular as they get out-of-date over time. Thus, we use user interactions in recent t hours to calculate near real-time CTR (denoted as c t ) for news popularity prediction. However, the accurate computation of CTR needs to accumulate sufficient user interactions, which is challenging for those newly published news.Fortunately, news content is very informative for predicting news popularity. For example, news on breaking events such as earthquakes are usually popular since they contain important information for many of us. Thus, besides near real-time CTR, we incorporate news content into news popularity prediction. We apply a dense network to the news content embedding n to predict the contentbased news popularityp c . Since news content is time-independent and cannot capture the dynamic change of news popularity, we incorporate news recency information, which is defined as the duration between the publish time and the prediction time. It can measure the freshness of news articles, which is useful for improving content-based popularity prediction. We quantify the news recency r in hours and use a recency embedding layer to convert the quantified news recency into an embedding vector r. Then we apply a dense network to r to predict the recency-aware content-based news popularityp r . Besides, since different news content usually have different lifecycles, we propose to model time-aware content-based news popularityp fromp c andp r using a content-specific aggregator:p = θ•p c +(1−θ)•p r , θ = σ(W p •[n, r]+b p ), (1)where θ ∈ (0, 1) means the content-specific gate, σ(•) means the sigmoid activation, [•, •] means the Next, we introduce the popularity-aware user encoder in PP-Rec for user interest modeling, which is shown in Fig. 5 . In general, news popularity can influence users' click behaviors, and causes bias in behavior based user interest modeling (Zheng et al., 2010) . Eliminating the popularity bias in user behaviors can help more user interest from user behaviors more accurately. For example, a user may click the news "Justin Timberlake unveils the song" because he likes the songs of "Justin Timberlake", while he may click the news "House of Representatives impeaches President Trump" because it is popular and contains breaking information. Among these two behaviors, the former is more informative for modeling the user interest. Thus, we design a popularity-aware user encoder to learn user interest representation from both content and popularity of clicked news. It contains three components, which we will introduce in details.First, motivated by Wu et al. (2019e), we apply a news multi-head self-attention network to the representations of clicked news to capture their relatedness and learn contextual news representation. Second, we uniformly quantify the popularity of the i-th clicked news predicted by the time-aware news popularity predictor 2 and convert it into an embedding vector p i via popularity embedding. Third, besides news popularity, news content is also useful for selecting informative news to model user interest (Wu et al., 2019a) . 
Thus, we propose a content-popularity joint attention network (CPJA) to alleviate popularity bias and select important clicked news for user interest modeling, which is formulated as:EQUATIONwhere α i and m i denote the attention weight and the contextual news representation of the i-th clicked news respectively. q and W u are the trainable parameters. The final user interest embedding u is formulated as a weighed summation of the contextual news representations:u = N i=1 α i • m i .In this section, we introduce how we rank the candidate news and train the model in detail. The ranking score of a candidate news for a target user is based on the combination of a personalized matching score s m and a news popularity score s p . The former is computed based on the relevance between user embedding u and news embedding n. Following Okura et al. (2017), we adopt dot product to compute the relevance. The latter is predicted by the time-aware news popularity predictor. In addition, the relative importance of the personalized matching score and the news popularity score is usually different for different users. For example, the news popularity score is more important than the personalized matching score for cold-start users since the latter is derived from scarce behaviors and is usually inaccurate. Thus, we propose a personalized aggregator to combine the personalized matching score and news popularity score:EQUATIONwhere s denotes the ranking score, and the gate η is computed based on the user representation u via a dense network with sigmoid activation. We use the BPR pairwise loss (Rendle et al., 2009) for model training. In addition, we adopt the negative sampling technique to select a negative sample for each positive sample from the same impression. The loss function is formulated as:EQUATIONwhere s p i and s n i denote the ranking scores of the i-th positive and negative sample respectively, and D denotes the training dataset.
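A PyTorch sketch of the score aggregation and training objective: the content-specific gate of Eq. (1), a personalized aggregator of the matching and popularity scores, and a BPR loss with one sampled negative. The exact forms of the popularity heads, the aggregator, and the BPR formulation are assumptions filling the elided equations (the near real-time CTR term is omitted), and all shapes are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PopularityAndRanking(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.gate_p = nn.Linear(2 * d, 1)       # content-specific gate (Eq. 1)
        self.pop_from_content = nn.Linear(d, 1)
        self.pop_from_recency = nn.Linear(d, 1)
        self.gate_user = nn.Linear(d, 1)        # personalized aggregator gate

    def popularity(self, n, r):
        """Time-aware content-based popularity: p = theta*p_c + (1-theta)*p_r."""
        theta = torch.sigmoid(self.gate_p(torch.cat([n, r], dim=-1)))
        return theta * self.pop_from_content(n) + (1 - theta) * self.pop_from_recency(r)

    def ranking_score(self, u, n, r):
        s_m = (u * n).sum(dim=-1, keepdim=True)           # personalized matching
        s_p = self.popularity(n, r)                       # news popularity score
        eta = torch.sigmoid(self.gate_user(u))            # user-dependent weight
        return eta * s_m + (1 - eta) * s_p                # assumed aggregator form

def bpr_loss(pos_scores, neg_scores):
    return -F.logsigmoid(pos_scores - neg_scores).mean()

d, batch = 64, 16
model = PopularityAndRanking(d)
u = torch.randn(batch, d)                                 # user interest embeddings
n_pos, n_neg = torch.randn(batch, d), torch.randn(batch, d)   # news embeddings
r_pos, r_neg = torch.randn(batch, d), torch.randn(batch, d)   # recency embeddings

loss = bpr_loss(model.ranking_score(u, n_pos, r_pos),
                model.ranking_score(u, n_neg, r_neg))
loss.backward()
print(loss.item())
```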
2
Following the set-up of the CoNLL shared task in 2009, we consider predicate-argument structures that consist of a verbal or nominal predicate p and PropBank-labelled arguments a i ∈ {a 1 . . . a n }, where each a i corresponds to the head word of the phrase that constitutes the respective argument. Traditional semantic role labelling approaches compute a set of applicable features on each pair p, a i , such as the observed lemma type of a word and the grammatical relation to its head, that serve as indicators for a particular role label. The disadvantage of this approach lies in the fact that indicator features such as word and lemma type are often sparse in training data and hence do not generalize well across domains. In contrast, features based on distributional representations (e.g., raw co-occurrence frequencies) can be computed for every word, given that it occurs in some unlabelled corpus. In addition to this obvious advantage for out-of-domain settings, distributional representations can provide a more robust input signal to the classifier, for instance by projecting a matrix of co-occurrence frequencies to a lower-dimensional space. We hence hypothesize that such features enable the model to become more robust out-of-domain, while providing higher precision in-domain.Although simply including the components of a word representation as features to a classifier can lead to immediate improvements in SRL performance, this observation seems in part counterintuitive. Just because one word has a specific representation does not mean that it should be assigned a specific argument label. In fact, one would expect a more complex interplay between the representation of an argument a i and the context it appears in. To model aspects of this interplay, we define an extended set of features that further includes representations for the combination of p and a i , the set of words in the dependency path between p and a i , and the set of words in the full span of a i . We compute additive compositional representations of multiple words, using the simplest method of Mitchell and Lapata (2010) where the composed representation is the uniformly weighted sum of each single representation. Our full set of feature types based on distributional word representations is listed in Table 1 .
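A NumPy sketch of building the composed feature types by uniformly weighted addition of the individual word representations, following the simplest composition method mentioned above; the embedding lookup and example words are placeholders.

```python
import numpy as np

def compose(words, embeddings, dim=100):
    """Uniformly weighted additive composition of word representations."""
    vecs = [embeddings[w] for w in words if w in embeddings]
    if not vecs:
        return np.zeros(dim)
    return np.mean(vecs, axis=0)

rng = np.random.default_rng(3)
emb = {w: rng.standard_normal(100) for w in
       ["buy", "acquire", "company", "the", "shares", "of"]}

predicate, argument = "buy", "company"
dep_path = ["buy", "of", "company"]             # words on the predicate-argument path
arg_span = ["the", "shares", "of", "company"]   # words in the argument span

features = {
    "arg": emb[argument],
    "pred_arg": compose([predicate, argument], emb),
    "path": compose(dep_path, emb),
    "span": compose(arg_span, emb),
}
print({k: v.shape for k, v in features.items()})
```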
2
In this section we look at classification datasets, discuss details of RoBERTa and ULMFiT models and the classifiers which are trained on top of these language models.Dataset for RoBERTa pre-training. We use synthetically generated code-mixed data for Tamil 5 prepared in (Arora, 2020a) to pretrain RoBERTa from scratch. The dataset is a collection of Tamil sentences written in Latin script. It was prepared by transliterating Tamil Wikipedia articles using Indic-Trans 6 library.Classification datasets. Table 1 shows statistics of datasets of both tasks. We observe that the statistics are fairly consistent across train, valid and test sets. Classification dataset for HSD (Chakravarthi, 2020) has 3 classes whereas that in OLI has 6 classes. Both the classification datasets have significant class imbalance depicting real-world scenarios. Additionally, they contain code-mixed comments/posts in both Latin and native scripts, making the tasks challenging.We take a two-step approach to the problem by pretraining ULMFiT (Howard and Ruder, 2018) and RoBERTa (Liu et al., 2019 ) models on synthetically generated code-mixed language followed by an ensemble of two classifiers which are trained on top of ULMFiT and RoBERTa language models respectively.We use pre-trained ULMFiT model for code-mixed Tamil similar to the one used in (Arora, 2020b at the last layer.RoBERTa model builds on BERT (Devlin et al., 2019) and modifies BERT's key hyperparameters, removes the next-sentence pre-training objective and trains with much larger mini-batches and learning rates. RoBERTa has the same architecture as BERT but it uses a different pre-training scheme and tokenizes text using Byte-Pair Encoding (Sennrich et al., 2016) . We use implementation of RoBERTa in Huggingface's Transformers library 7 to pre-train the model from scratch. We train it for 7 epochs using a learning rate of 5e-5 and a dropout of 0.1 for attention and hidden layers. Table 2 compares perplexity of our pre-trained RoBERTa model with that of ULMFiT model which is also trained on the same code-mixed data.We pre-process the classification datasets of both tasks by transforming comments in native script into Latin script using Indic-Trans library. This step is required because both of our pre-trained language models, ULMFiT and RoBERTa, are trained on code-mixed data in Latin script. We also perform other basic pre-processing steps like lowercasing and removing @username mentions. We did not apply other pre-processing steps such as stop words removal or removal of words that are too short since both of our pre-trained language models 7 https://huggingface.co/transformers/model doc/roberta.html are trained on complete sentences and we wanted the model to figure out on its own if stop/short words are important for classification or not.
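A hedged Hugging Face sketch of pretraining a RoBERTa model from scratch on the transliterated code-mixed corpus with masked language modelling. Only the learning rate (5e-5), dropout (0.1), and 7 epochs come from the description above; the corpus path, vocabulary size, model depth, block size, and batch size are assumptions.

```python
import os
from tokenizers import ByteLevelBPETokenizer
from transformers import (DataCollatorForLanguageModeling, LineByLineTextDataset,
                          RobertaConfig, RobertaForMaskedLM, RobertaTokenizerFast,
                          Trainer, TrainingArguments)

CORPUS = "tamil_codemixed.txt"        # assumed path to the transliterated corpus
os.makedirs("tamil-roberta", exist_ok=True)

# 1. Train a byte-level BPE tokenizer on the code-mixed text.
bpe = ByteLevelBPETokenizer()
bpe.train(files=[CORPUS], vocab_size=30_000, min_frequency=2,
          special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"])
bpe.save_model("tamil-roberta")
tokenizer = RobertaTokenizerFast.from_pretrained("tamil-roberta",
                                                 model_max_length=256)

# 2. A small RoBERTa configuration with the dropout stated above.
config = RobertaConfig(vocab_size=30_000, num_hidden_layers=6,
                       hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1)
model = RobertaForMaskedLM(config)

# 3. Masked-language-model pretraining for 7 epochs at lr 5e-5.
dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path=CORPUS, block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True,
                                           mlm_probability=0.15)
args = TrainingArguments(output_dir="tamil-roberta", num_train_epochs=7,
                         learning_rate=5e-5, per_device_train_batch_size=32)
trainer = Trainer(model=model, args=args, data_collator=collator,
                  train_dataset=dataset)
trainer.train()
```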
2
Memory Graph Networks (MGN) (Section 2.2): Many previous work in QA or MRC systems use memory networks to evaluate multiple answer candidates with transitive reasoning, and typically store all potentially relevant raw sentences or bag-of-symbols as memory slots. However, naive increase of memory slot size or retentionbased sequential update of memory slots often increase search space for answer candidates, leading to poor precision especially for the Episodic Memory QA task. To overcome this issue, with MGN we store memory graph nodes as initial memory slots, where additional contexts and answer candidates can be succinctly expanded and reached via graph traversals. For each (q, m (k) ) pair, MGN predicts optimal memory slot expansion steps:p (k) = {[p (k) e,t ; p (k) n,t ]} Tt=1 for edge paths p e and corresponding node paths p n (Figure 3 ). QA Modules (Section 2.3, 2.4): An estimated answerâ = QA(m, q) is predicted given a query and MGN graph path output from initial memory slots. Specifically, the model outputs a module program {u (k) } for several module networks (e.g. CHOOSE, COUNT, ...) via module selector, each of which produces an answer vector . The aggregated result of module network outputs determines the top-k answers.Query encoder: We represent each textual query with an attention-based Bi-LSTM language model (Conneau et al., 2017) with GloVe (Pennington et al., 2014) distributed word embeddings trained on the Wikipedia and the Gigaword corpus with a total of 6B tokens. Memory encoder: We represent each memory node based on both its structural features (graph embeddings) and contextual multi-modal features from its neighboring nodes (e.g. attribute values).Structural contexts of each memory node (m s ) are encoded via graph embeddings projection approaches (Bordes et al., 2013) , in which nodes with similar relation connectivity are mapped closer in the embeddings space. The model for obtaining embeddings from a MG (composed of subject-relation-object (s, r, o) triples) can be formulated as follows:P (I r (s, o) = 1|θ) = score e(s), e r (r), e(o) (1)where I r is an indicator function of a known relation r for two entities (s,o) (1: valid relation, 0: unknown relation), e is a function that extracts embeddings for entities, e r extracts embeddings for relations, and score(•) is a function (e.g. multilayer perceptrons) that produces a likelihood of a valid triple.For contextual representation of memories (m c ), we compute attention-weighted sum of textual representation of neighboring nodes and attributes (connected via r j ∈ R), using the same language model as the query encoder:m c = γ j m c,j γ = σ(W qγ q)Note that the query attention vector γ attenuates or amplifies each attribute of memory based on a query vector to better account for query-memory compatibility accordingly. We then concatenate the structural features with semantic contextual features to obtain the final memory representation (m = [m s ; m c ]).Inspired by the recently introduced graph traversal networks (Moon et al., 2019) which output discrete graph operations given input contexts, we formulate our MGN as follows. Given a set of initial memory slots (m) and a query (q), the MGN model outputs a sequence path of walk steps (p) within MG to attend to relevant nodes or expand initial memory slots ( Figure 3 ):EQUATIONSpecifically, we define the attention-based graph decoder model which prunes unattended paths, which effectively reduce the search space for memory expansion. 
We formulate the decoding steps for MGN as follows (bias terms for gates are omitted for simplicity of notation):EQUATIONwhere z t is a context vector at decoding step t, produced from the attention over graph relations which is defined as follows:EQUATIONwhere α t ∈ R |R| is an attention vector over the relations space, r k is relation embeddings, and z t is a resulting node context vector after walking from its previous node on an attended path.The graph decoder is trained with the groundtruth walk paths by computing the combined loss of L walk (m, q, p) = i,t L e + L n between predicted paths and each of {p e , p n }, respectively (L e : loss for edge paths, and L n for node paths):p e =p (i) e,t max[0,p e • p e,t (i) −α t r • (p (i) e,t −ỹ e ) ] + p n =p (i) n,t max[0,p n • p n,t (i) −h t (i) • (p (i) n,t −p n ) ]At test time, we expand the memory slots by activating the nodes along the optimal paths based on the sum of their relevance scores (left) and softattention-based output path scores (right) at each decoding step:EQUATIONMGN outputs are then passed to module networks for the final stage of answer prediction. We extend the previous work in module networks (Kottur et al., 2018) , often used in VQA tasks, to accommodate for graph nodes output via MGN. We first formulate the module selector which outputs the module label probability {u (k) } given input contexts for each memory node, trained with cross-entropy loss L module :{u (k) } = Softmax(MLP(q, {m (k) })) (6)We then define the memory attention to attenuate or amplify all activated memory nodes based on their compatibility with query, formulated as follows:EQUATIONFor this work, we propose the following four modules: CHOOSE, COUNT, CONFIRM, SET OR, and SET AND, hence u (k) ∈ R 5 . Note that the formulation can be extended to the auto-regressive decoder in case sequential execution of modules is required.CHOOSE module outputs answer space vector by assigning weighted sum scores to nodes along the MGN soft-attention walk paths. End nodes with the most probable walk paths thus get the highest scores, and their node attribute values are considered as answer candidates. COUNT module counts the query-compatible among the activated nodes, a = W K ([α; max{α}; min{α}]). CONFIRM uses a similar approach to COUNT, except it outputs a binary label indicating whether the memory nodes match the query condition: a = W b ([α; max{α}; min{α}]). SET modules either combine or find intersection among answer candidates by updating the answer vectors with a = max{W s {a (k) }} or a = min{W s {a (k) }}.Answers from each module network (Section 2.3 are then aggregated as weighted sum of answer vectors with module probability (Eq.6), guided by memory attention (Eq.7). Predicted answers are evaluated with cross-entropy loss L ansWe observe that the model performs better when the MGN component of the model is pre-trained with ground-truth paths. We thus first train the MGN network with the same training split (without answer labels), and then train the entire model with module networks, fully end-to-end supervised with L = L walk + L module + L ans .
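A PyTorch sketch of the module selector (Eq. 6) and memory attention (Eq. 7): an MLP over the query and pooled memory representations yields a distribution over the five modules, and each module's answer vector is aggregated with those probabilities. The pooling, the linear "modules", and all shapes are toy stand-ins, not the actual CHOOSE/COUNT/CONFIRM/SET modules.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModuleSelector(nn.Module):
    """Predicts a distribution over answer modules from the query and memories."""
    def __init__(self, d, n_modules=5):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(),
                                 nn.Linear(d, n_modules))
        self.mem_query = nn.Linear(d, d)

    def forward(self, q, memories):
        # q: (batch, d); memories: (batch, n_mem, d)
        pooled = memories.mean(dim=1)
        u = F.softmax(self.mlp(torch.cat([q, pooled], dim=-1)), dim=-1)      # Eq. (6)
        # Memory attention: compatibility of each activated node with the query.
        scores = torch.bmm(memories, self.mem_query(q).unsqueeze(-1)).squeeze(-1)
        alpha = F.softmax(scores, dim=-1)                                     # Eq. (7)
        return u, alpha

d, n_mem, n_ans, batch = 64, 12, 50, 2
selector = ModuleSelector(d)
q = torch.randn(batch, d)
memories = torch.randn(batch, n_mem, d)
u, alpha = selector(q, memories)

# Toy stand-ins for the module networks: each maps attended memories to an
# answer-space vector; the real modules differ as described in Section 2.3.
modules = nn.ModuleList([nn.Linear(d, n_ans) for _ in range(5)])
attended = torch.bmm(alpha.unsqueeze(1), memories).squeeze(1)    # (batch, d)
answers = torch.stack([m(attended) for m in modules], dim=1)     # (batch, 5, n_ans)
final = (u.unsqueeze(-1) * answers).sum(dim=1)                   # weighted aggregation
print(final.shape)
```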
2