References
[1] Introduction to the Special Issue on Natural Language Generation. Natural language generation is the process of deliberately constructing a natural language text in order to meet specified communicative goals. A more recent ...
[2] What is NLG? (ACL Anthology). Giving an adequate general definition of the input to natural language generation (NLG), and hence to NLG itself, is a notoriously ...
[3] The Natural Language Generation Pipeline, Neural Text ... (HAL, Dec 8, 2020). In this short paper, we surveyed a number of neural data-to-text generation models which implement some or all of the NLG pipeline sub-tasks.
[4] Large Language Models: A Survey (arXiv). Large Language Models (LLMs) have drawn a lot of attention due to their strong performance on a wide range of natural language tasks, ...
[5] Natural Language Generation in the context of the Semantic Web. This paper provides an overview of natural language generation approaches in the context of the Semantic Web.
[6] A Generative Model for Joint Natural Language Understanding and ... . Natural language understanding (NLU) and natural language generation (NLG) are two fundamental and related tasks in building task-oriented dialogue systems ...
[7] Measuring Attribution in Natural Language Generation Models. Large neural models have brought a new challenge to natural language generation (NLG): it has become imperative to ensure the safety and reliability of the ...
[8] Building Natural Language Generation (Macquarie University). This book describes natural language generation (NLG), which is a subfield of artificial intelligence and computational linguistics that is concerned with ...
[9] NLP - overview (Stanford Computer Science). The field of natural language processing began in the 1940s, after World War II. At this time, people recognized the importance of translation from one ...
[10] McDonald. 1982. Salience: The Key to the Selection Problem in Natural Language Generation. In 20th Annual Meeting of the Association for Computational Linguistics.
[11] Natural Language Generation (DTIC). Penman is a natural language sentence generation program being developed at USC/ISI (the Information Sciences Institute of the University of Southern California).
[12] Rhetorical Structure Theory: Toward a functional theory of text organization. In Mann and Thompson (1988), an unabridged version of this paper, the definitional uses of the following terms are discussed: text span, reader, writer, ...
[13] International Natural Language Generation Conference (INLG). INLG'2000: Proceedings of the First International Conference on Natural Language Generation (41 papers); 1996: Eighth International Natural Language Generation ...
[14] Building Applied Natural Language Generation Systems. Natural Language Generation (NLG) is about building systems that produce understandable texts from non-linguistic data, including tasks like content ...
[15] Using Argumentation to Control Lexical Choice: A Functional ... . This thesis presents the FUF generation formalism. FUF is a declarative formalism derived from the Functional Unification Grammars formalism ...
[16] Adwait Ratnaparkhi. 2000. Trainable Methods for Surface Natural Language Generation. In 1st Meeting of the North American Chapter of the Association for Computational Linguistics.
[17] Sequence to Sequence Learning with Neural Networks (arXiv, Sep 10, 2014). In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure.
[18] Attention Is All You Need (arXiv:1706.03762, Jun 12, 2017). We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely.
[19] The WebNLG Challenge: Generating Text from RDF Data. The WebNLG challenge consists in mapping sets of RDF triples to text. It provides a common benchmark on which to train, evaluate and compare "microplanners".
[20] The E2E Dataset: New Challenges For End-to-End Generation (arXiv, Jun 28, 2017). This paper describes the E2E data, a new dataset for training end-to-end, data-driven natural language generation systems in the restaurant domain.
[21] BERT: Pre-training of Deep Bidirectional Transformers for Language ... . BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers.
[22] Separating Planning from Realization in Neural Data-to-Text ... (arXiv). We propose to split the generation process into a symbolic text-planning stage that is faithful to the input, followed by a neural generation stage that ...
[23] Amit Moryossef, Yoav Goldberg, and Ido Dagan. 2019. Step-by-Step: Separating Planning from Realization in Neural Data-to-Text Generation. In Proceedings of the ...
[24] Plug and Play Language Models: A Simple Approach to Controlled ... . We propose a simple alternative: the Plug and Play Language Model (PPLM) for controllable language generation, which combines a pretrained LM with one or more ...
[25] Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks (May 22, 2020). We explore a general-purpose fine-tuning recipe for retrieval-augmented generation (RAG) -- models which combine pre-trained parametric and non-parametric ...
[26] Learning Transferable Visual Models From Natural Language ... . The paper proposes learning visual models by predicting image-caption pairs, then using natural language for zero-shot transfer to downstream tasks.
[27] Five sources of bias in natural language processing (PMC). Data augmentation by controlling the gender attribute is an effective technique in mitigating gender bias in NLP processes (Dinan et al., 2020; Sun et al., 2019) ...
[28] Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. We fine-tune and evaluate our models on a wide range of knowledge-intensive NLP tasks and set the state of the art on three open domain QA tasks, outperforming ...
[29] Building Natural Language Generation Systems. Ehud Reiter, University of Aberdeen; Robert Dale, Macquarie University.
[30] Artificial Intelligence | Natural Language Generation (GeeksforGeeks, Jul 9, 2025). How does NLG work: Content Determination, where the system decides which information from the input data is relevant and should be mentioned.
[31] Rhetorical Structure Theory: A Theory of Text Organization. Finally, RST provides a framework for investigating Relational Propositions, which are unstated but inferred propositions that arise from the text structure ...
[32] Statistical Acquisition of Content Selection Rules for Natural ... . These examples illustrate that content selection rules should capture cases where an attribute should be included only under certain conditions ...
[33] Survey of the State of the Art in Natural Language Generation (Jan 27, 2018). This paper surveys Natural Language Generation (NLG), which is generating text or speech from non-linguistic input, and its core tasks.
[34] A Comprehensive Review of Handling Missing Data (arXiv, Apr 7, 2024). This article reviews existing literature on handling missing values. It compares and contrasts existing methods in terms of their ability to handle different ...
[35] A survey on missing data in machine learning (Journal of Big Data, Oct 27, 2021). In this paper, we aggregate some of the literature on missing data, particularly focusing on machine learning techniques.
[36] Natural Language Generation from Graphs. This NLG engine used canned text, templates, and grammar rules to produce texts and hypertexts; helping technical authors produce instructions for using ...
[37] Summary of Implications of TAG for NLG, Focusing on Syntactic Realization.
[38] An Overview of SURGE: a Reusable Comprehensive Syntactic ... . This paper describes SURGE, a syntactic realization front-end for natural language generation systems. By gradually integrating complementary aspects of ...
[39] Robust, applied morphological generation (ACL Anthology). As an individual module, the morphological generator will be more easily shareable between several different NLG applications, and integrated into new ones.
[40] Adapting Chart Realization to CCG. We describe a bottom-up chart realization algorithm adapted for use with Combinatory Categorial Grammar (CCG), and show how it can be used to ...
[41] Using Integer Linear Programming for Content Selection ... . Our method is the first one to consider content selection, lexicalization, and sentence aggregation as an ILP joint optimization problem in the context of multi- ...
[42] Acquiring Input Features from Stock Market Summaries: A NLG ... (Aug 7, 2025). In this work, we present WSJ-Markets, a financial data-to-text generation ...
[43] Abstract Meaning Representation for Sembanking (ACL Anthology). We describe Abstract Meaning Representation (AMR), a semantic representation language in which we are writing down the meanings of thousands of English ...
[44] Atlas.txt: Linking Geo-referenced Data to Text for NLG (ACL Anthology). Geo-referenced data which are often communicated via maps are inaccessible to the visually impaired population. We summarise existing approaches to improving ...
[45] SportSett:Basketball - A robust and maintainable dataset for Natural ... . In this resource paper, we introduce the SportSett:Basketball database. This easy-to-use resource allows for simple scripts to be written which ...
[46] Recipes for building an open-domain chatbot (arXiv:2004.13637). Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural ...
[47] Distilling the Knowledge of Large-scale Generative Models into ... (Aug 28, 2021). On the other hand, retrieval models could return responses with much lower latency but show inferior performance to the large-scale generative ...
[48] MultiWOZ -- A Large-Scale Multi-Domain Wizard-of-Oz Dataset for ... (Sep 29, 2018). MultiWOZ is a large, fully-labeled dataset of 10k human-human written conversations across multiple domains, used for task-oriented dialogue ...
[49] Show and Tell: A Neural Image Caption Generator (arXiv:1411.4555, Nov 17, 2014). Oriol Vinyals, Alexander Toshev, Samy Bengio, Dumitru Erhan.
[50] COCO dataset. COCO is a large-scale object detection, segmentation, and captioning dataset.
[51] The Unreasonable Effectiveness of Recurrent Neural Networks (May 21, 2015). We'll train RNNs to generate text character by character and ponder the question "how is that even ...
[52] A Computational Model of Linguistic Humor in Puns (PMC). Our work is the first, to our knowledge, to integrate a computational model of general language understanding and humor theory to quantitatively predict humor ...
[53] Computational Creativity in Meme Generation: A Multimodal Approach (Mar 23, 2023). In this paper, we explore computational creativity in internet meme generation, employing a multimodal framework that integrates natural ...
[54] Generative AI enhances individual creativity but reduces ... (Science, Jul 12, 2024). These results point to an increase in individual creativity at the risk of losing collective novelty. This dynamic resembles a social dilemma ...
[55] How to Build AI Speech-to-Text and Text-to-Speech Accessibility ... (Sep 1, 2025). This is where AI-driven accessibility tools can make a difference. From real-time captioning to adaptive reading support, artificial ...
[56] BLEU: a Method for Automatic Evaluation of Machine Translation. BLEU is a method for automatic machine translation evaluation, measuring closeness to human translations using a weighted average of phrase matches.
[57] Chin-Yew Lin. 2004. ROUGE: A Package for Automatic Evaluation of Summaries. In Text Summarization Branches Out, pages 74-81.
[58] Human evaluation of automatically generated text: Current trends ... . This paper provides an overview of how (mostly intrinsic) human evaluation is currently conducted and presents a set of best practices, grounded in the ...
[59] BERTScore: Evaluating Text Generation with BERT (arXiv). We propose BERTScore, an automatic evaluation metric for text generation. Analogously to common metrics, BERTScore computes a similarity score for each token.
[60] The Second Multilingual Surface Realisation Shared Task (SR'19). We report results from the SR'19 Shared Task, the second edition of a multilingual surface realisation task organised as part of the EMNLP'19 Workshop.
[61] Evaluating Amazon's Mechanical Turk as a Tool for Experimental ... . Amazon Mechanical Turk (AMT) is an online crowdsourcing service where anonymous online workers complete web-based tasks for small sums of money.
[62] Survey of Hallucination in Natural Language Generation (arXiv, Feb 8, 2022). In this survey, we thus provide a broad overview of the research progress and challenges in the hallucination problem in NLG.
[63] On the Risk of Misinformation Pollution with Large Language Models (Dec 6, 2023). Our results show that (1) LLMs are excellent controllable misinformation generators, making them prone to potential misuse, and (2) ...
[64] LLM-based NLG Evaluation: Current Status and Challenges. In this survey, we first give a taxonomy of LLM-based NLG evaluation methods, and discuss their pros and cons, respectively.
[65] LLMs for XAI: Future Directions for Explaining Explanations (arXiv, May 9, 2024). We investigate the use of Large Language Models (LLMs) to transform ML explanations into natural, human-readable narratives.
[66] Natural language generation for social robotics: opportunities and ... (Mar 11, 2019). This article summarizes the state of the art in the two individual research areas of social robotics and natural language generation.
[67] Personalized Generation In Large Model Era: A Survey (Jul 27, 2025). Fundamentally, PGen entails user modeling based on various personalized contexts and multimodal instructions, extracting personalized ...
[68] Most NLG is Low-Resource: here's what we can do about it. In this position paper, we initially present the challenges researchers and developers often encounter when dealing with low-resource settings in NLG. We then ...
[69] Ethics Issues in Natural Language Generation Systems. With the advent of big data, there is increasingly a need to distill information computed from these datasets into automated summaries and reports.