NIST launches a new model to assess generative AI content
The National Institute of Standards and Technology (NIST) has announced its new GenAI program. With NIST GenAI, the agency aims to build a platform that can reliably identify AI-generated content, including text, audio, and images.
NIST announced the launch of NIST GenAI in a press release on 29 April, alongside the first of a planned series of draft publications on AI risks and standards.
Aims of the new NIST GenAI model
NIST explained the program's purpose in its press release. "The NIST GenAI program will issue a series of challenge problems [intended] to evaluate and measure the capabilities and limitations of generative AI technologies," the press release reads. "These evaluations will be used to identify strategies to promote information integrity and guide the safe and responsible use of digital content."
With this launch, NIST has made the program's objectives clear: it is intended chiefly to promote safer use of AI by making AI-generated content easier to identify. According to NIST, the major objectives are:
- Evolving benchmark dataset creation.
- Developing content authenticity detection technologies for formats including text, audio, image, video, and code.
- Conducting comparative analyses using relevant metrics.
- Promoting the development of technologies for identifying the source of fake or misleading information.
The program's first project is a pilot study aimed at developing systems that can distinguish human-generated content from AI-generated content, a capability that is increasingly needed as advanced AI technologies see wider use.
NIST invites teams from academia, industry, and research labs to submit "generators" or "discriminators." Generators are AI systems that produce content, while discriminators are systems that attempt to distinguish AI-generated content from human-written content. Registration for both opens in May, and the deadline for round 1 submissions is in August.
"Generators in the study must generate 250-word-or-fewer summaries provided a topic and a set of documents, while discriminators must detect whether a given summary is potentially AI-written. To ensure fairness, NIST GenAI will provide the data necessary to test the generators. Systems trained on publicly available data and that don't '[comply] with applicable laws and regulations' won't be accepted," NIST says.
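To make the two roles concrete, here is a minimal sketch in Python. Both functions are hypothetical illustrations, not NIST's actual submission interface: `is_valid_generator_output` checks the pilot's 250-word limit on generated summaries, and `naive_discriminator` stands in for a discriminator by returning a score between 0 and 1 from a crude heuristic (real submissions would use trained classifiers).

```python
def is_valid_generator_output(summary: str, max_words: int = 250) -> bool:
    """Check the pilot's stated constraint that a generated summary
    contains 250 words or fewer. (Illustrative helper, not NIST's API.)"""
    return len(summary.split()) <= max_words


def naive_discriminator(summary: str) -> float:
    """Toy stand-in for a discriminator: returns a score in [0, 1],
    where higher suggests 'more likely AI-written'. The heuristic here
    (average word length) is purely illustrative; actual discriminators
    would be trained models."""
    words = summary.split()
    if not words:
        return 0.5  # no evidence either way
    avg_len = sum(len(w) for w in words) / len(words)
    # Crude heuristic only: longer average words -> higher score.
    return min(1.0, avg_len / 10)
```

For example, a 251-word summary would fail the generator constraint, while any non-empty text gets a discriminator score in the unit interval.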
The final results are expected to be published in February 2025.