(April 1st, 2024) 🚀 Submissions are now open.
🚀 Our Starter Kit is available to help you quickly onboard and make your first submission.

A RAG QA system takes a question Q as input and outputs an answer A. The answer is generated by an LLM from information retrieved from external sources, or directly from knowledge internalized in the model. The answer should provide useful information that addresses the question, without hallucinations or harmful content such as profanity.
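As a rough illustration, the sketch below shows the skeleton of such a system: retrieved snippets are packed into a prompt and passed to an LLM, which is instructed to answer only from the provided evidence. The `call_llm` helper is a placeholder for whatever model you use; it is not part of the challenge API.

```python
from typing import List


def call_llm(prompt: str) -> str:
    """Placeholder for an actual LLM call (e.g. a local model or an API client)."""
    raise NotImplementedError


def answer_question(question: str, snippets: List[str], max_chars: int = 4000) -> str:
    """Generate an answer A for question Q from retrieved snippets.

    Snippets are concatenated into the prompt up to a character budget so the
    context stays within the model's limit; the instruction to answer only
    from the given references is a simple way to discourage hallucination.
    """
    context, used = [], 0
    for snippet in snippets:
        if used + len(snippet) > max_chars:
            break
        context.append(snippet)
        used += len(snippet)

    prompt = (
        "Answer the question using only the references below. "
        "If the references do not contain the answer, say you don't know.\n\n"
        + "\n---\n".join(context)
        + f"\n\nQuestion: {question}\nAnswer:"
    )
    return call_llm(prompt)
```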

Task #1: Retrieval Summarization.

In this task, you are provided with up to five web pages for each question. These web pages are likely, but not guaranteed, to be relevant. The objective of this task is to evaluate the answer-generation capabilities of RAG (Retrieval-Augmented Generation) systems.
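Because the pages are supplied with each question rather than retrieved live, a submission for this task mainly needs to extract usable text from them and pass it to an answer generator such as the sketch above. The `page_result` key below is illustrative, not the official dataset schema; check the dataset files and the Starter Kit for the exact format.

```python
from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4


def pages_to_snippets(web_pages, chunk_size: int = 1000):
    """Strip HTML from the (up to five) provided pages and cut the text into chunks.

    `web_pages` is assumed to be a list of dicts holding raw HTML under a
    hypothetical "page_result" key; adapt the key to the actual dataset schema.
    """
    snippets = []
    for page in web_pages:
        html = page.get("page_result", "")
        text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
        snippets.extend(text[i:i + chunk_size] for i in range(0, len(text), chunk_size))
    return snippets
```

An answer can then be produced with `answer_question(question, pages_to_snippets(pages))`.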

To download the data, please see: https://www.aicrowd.com/challenges/meta-comprehensive-rag-benchmark-kdd-cup-2024/problems/retrieval-summarization/dataset_files.

To know more about the CRAG challenge, please see: https://www.aicrowd.com/challenges/meta-comprehensive-rag-benchmark-kdd-cup-2024.

Getting Started