lobiplease.blogg.se

Times of india news headline





In the early 1980s, natural language generation gradually became a hot research field.


The transformer-based model effectively solves the insufficient parallel ability of sequence-to-sequence models. The abstractive headline generation method can produce words that are not found in the original text, but it may also make the generated headlines depart from the original facts [18]. Because a recurrent neural network encodes a sequence such that earlier inputs are gradually forgotten over time, the intermediate semantics lack some significant information [19], which leads the headlines generated in the decoding process to deviate from the main idea of the news text. For example, as shown in Fig. 1, when the news is condensed, extractive news headline generation directly extracts some semantic information. Moreover, abstractive methods do not specifically process nonimportant or subimportant text; that is, some nonimportant semantic information is preserved with the same weight as the salient semantic information when generating headlines, which introduces noise. It can be observed that the TD-NHG model filters the semantic information in the original news, abandons the explicit information "When Ren Zhiqiang persisted in his role as a reporter for developers", and selects the salient semantic information that matters more to the context as the output of the generation. Outputs 3 and 4 each choose one salient piece of semantic information as the generated headline: "the living environment of private enterprises is getting worse" and "promoting the competitiveness of enterprises", respectively. Compared with the original news text, it is found that the forum refers not only to the poor living environment of private enterprises but also to how to improve their competitiveness amid large environmental changes.
The TD-NHG model uses masked multi-head self-attention to learn feature information from different representation subspaces of news texts, and uses a decoding selection strategy of top-k and top-p sampling together with a punishment mechanism (repetition penalty) in the decoding stage. We conducted comparative experiments on the LCSTS and CSTS datasets. The experimental results demonstrate that the proposed method can improve the accuracy and diversity of news headlines. News headline generation (NHG) [1, 2, 3, 4, 5] has been an important task in natural language processing (NLP) in recent years. NHG models can be divided into two categories: extractive and abstractive. The extractive approach directly selects several important words from the news text and rearranges them to form a news headline [6]. The abstractive approach uses advanced natural language processing algorithms to generate news headlines with techniques such as paraphrasing, synonymous substitution, and sentence contraction. Since neural network methods were applied to news headline generation, neural-network-based abstractive models [7, 8, 9, 10] have recently shown great performance. In recent years, encoder-decoder-based neural network models have been widely used in text summarization, mechanical fault detection [11], and other tasks. It is worth emphasizing that abstractive neural network models based on the encoder-decoder architecture [12, 13, 14, 15, 16] have been proved to perform well on the LCSTS dataset [17], the DUC-2004 dataset, and other datasets.
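The decoding selection strategy mentioned above (top-k, top-p, and a repetition penalty) can be sketched as follows. This is a minimal illustrative version, not the paper's implementation; the function and parameter names are assumptions for the example.

```python
import math
import random

def sample_next_token(logits, generated, top_k=5, top_p=0.9,
                      repetition_penalty=1.2, rng=random.Random(0)):
    """Pick the next token id from raw logits using top-k/top-p
    filtering plus a repetition penalty (illustrative sketch)."""
    # Repetition penalty: weaken tokens that were already generated
    # by dividing positive logits (multiplying negative ones) by the factor.
    logits = list(logits)
    for t in set(generated):
        logits[t] = (logits[t] / repetition_penalty if logits[t] > 0
                     else logits[t] * repetition_penalty)
    # Top-k: keep only the k highest-scoring token ids.
    order = sorted(range(len(logits)), key=lambda i: logits[i],
                   reverse=True)[:top_k]
    # Softmax over the kept tokens (shifted by the max for stability).
    m = max(logits[i] for i in order)
    exps = [math.exp(logits[i] - m) for i in order]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Top-p (nucleus): keep the smallest prefix whose mass reaches top_p.
    kept, mass = [], 0.0
    for tok, p in zip(order, probs):
        kept.append((tok, p))
        mass += p
        if mass >= top_p:
            break
    # Renormalize over the surviving tokens and sample one of them.
    z = sum(p for _, p in kept)
    r, acc = rng.random() * z, 0.0
    for tok, p in kept:
        acc += p
        if acc >= r:
            return tok
    return kept[-1][0]
```

With `top_k=1` this reduces to greedy decoding, which makes the effect of the repetition penalty easy to see: a previously generated token with the highest raw logit can lose to the runner-up once penalized.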


It is difficult to select the important words in the news and reproduce their expression, which results in headlines that inaccurately summarize the news. In this work, we propose the TD-NHG model, which stands for news headline generation based on an improved decoder from the transformer.
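The improved transformer decoder builds on masked self-attention, where a causal mask stops each position from attending to later positions. Below is a minimal single-head sketch of that masking idea (multi-head attention runs several such heads in parallel and concatenates their outputs); all names are illustrative assumptions, not the paper's actual code, and NumPy is assumed to be available.

```python
import numpy as np

def masked_self_attention(X, Wq, Wk, Wv):
    """Single-head masked self-attention over X of shape
    (seq_len, d_model). The causal mask prevents position i from
    attending to positions j > i (illustrative sketch)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)             # (seq_len, seq_len)
    mask = np.triu(np.ones_like(scores), k=1)   # 1s strictly above diagonal
    scores = np.where(mask == 1, -1e9, scores)  # block future positions
    # Row-wise softmax (shifted by the row max for numerical stability).
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V
```

Because every row of the score matrix is computed at once, the whole sequence is processed in parallel during training, unlike a recurrent decoder that must step through tokens one by one; position 0 can only attend to itself, so its output is exactly its own value vector.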


Most news headline generation models that use a sequence-to-sequence model or a recurrent network have two shortcomings: a lack of parallel ability and a tendency to repeatedly generate the same words.






