# NEPABench
A comprehensive benchmark suite for evaluating large language models on tasks related to the National Environmental Policy Act (NEPA) environmental review and permitting process.
## Citation

If you use NEPABench, please cite:

    @misc{acharya2026nepabench,
      author={Anurag Acharya and Rounak Meyur and Sai Koneru and Kaustav Bhattacharjee and Bishal Lakha and Alexander C. Buchko and Reilly P. Raab and Hung Phan and Koby Hayashi and Dan Nally and Mike Parker and Sai Munikoti and Sameera Horawalavithana},
      title={NEPABench: A Benchmark Suite for Environmental Permitting},
      year={2026},
      institution={Pacific Northwest National Laboratory},
      note={Preprint},
    }
## Overview

NEPABench covers nine distinct tasks spanning the full lifecycle of NEPA document analysis, from processing public comments and extracting tribal consultation data, to extracting structured metadata from regulatory documents and answering questions over EIS content. The benchmark draws from real-world government documents including Categorical Exclusions, Environmental Assessments, Environmental Impact Statements, Federal Register notices, and public comment records across multiple federal agencies and project types.
## Getting Started
To run and evaluate this benchmark, you will need MAPLEv2, a tool designed for evaluating language models on environmental permitting tasks. See individual task READMEs in each task folder for dataset-specific usage instructions.
## Repository Structure

    nepabench/
    ├── BinAssign/         # Task 1: Classify public comments into predefined bins
    ├── BinSummarize/      # Task 2: Summarize grouped public comments
    ├── CommentDelineate/  # Task 3: Extract key quotes from comment documents
    ├── TribeBench/        # Task 4: Extract tribal names from EIS sections
    ├── CXBench/           # Task 5: Extract metadata from Categorical Exclusion documents
    ├── EABench/           # Task 6: Extract metadata from Environmental Assessment documents
    ├── EISBench/          # Task 7: Extract metadata from Environmental Impact Statement documents
    ├── FedRegBench/       # Task 8: Extract structured data from Federal Register notices
    ├── nepaquad/          # Task 9: Question answering over EIS documents
    ├── prompts/           # Prompt templates for all tasks
    └── metrics/           # Evaluation metric configurations
## Tasks

### Comment Analysis

| Task | Folder | Entries | Description |
|---|---|---|---|
| Bin Assignment | BinAssign/ | ~17,500 | Classify a public comment into the most appropriate predefined category bin for a given project |
| Bin Summarization | BinSummarize/ | ~1,780 | Generate a narrative summary synthesizing all comments grouped under a single bin |
| Comment Delineation | CommentDelineate/ | ~1,927 | Extract verbatim key quotes capturing the substantive points from a full comment document |
| Tribal Name Extraction | TribeBench/ | – | Extract Native American tribal names mentioned within a specified section of an EIS or regulatory document |
### Information Extraction

| Task | Folder | Entries | Description |
|---|---|---|---|
| CX Extraction | CXBench/ | 1,400 | Extract 7 metadata fields (Year, Date, Location, Program/Field Office, CX codes, etc.) from Categorical Exclusion documents |
| EA Extraction | EABench/ | 257 | Extract 12 metadata fields (Project Title, Lead Agency, Location, Process Type, etc.) from Environmental Assessment documents |
| EIS Extraction | EISBench/ | 1,365 | Extract 14 metadata fields from Environmental Impact Statement documents and appendices |
| Federal Register Extraction | FedRegBench/ | 50 | Extract a rich nested structure (Notice, Project, Document, Comments, Meeting, Contact) from Federal Register notices |
### Question Answering

| Task | Folder | Entries | Description |
|---|---|---|---|
| NEPAQuAD | nepaquad/ | 1,589 | Answer questions over EIS documents across 10 question types, with and without provided context |
## Projects Covered
The comment analysis tasks (BinAssign, BinSummarize, CommentDelineate) draw from four distinct regulatory proceedings:
| Project ID | Full Name | Agency |
|---|---|---|
| CFFF | Westinghouse Fuel Fabrication Facility | Nuclear Regulatory Commission (NRC) |
| CPNP | California Project | Nuclear Regulatory Commission (NRC) |
| MBTA | Massachusetts Bay Transportation Authority | MBTA |
| WS | Western Solar Plan | Bureau of Land Management (BLM) |
## Prompts
The prompts/ folder contains prompt templates for all nine tasks. Each template uses named placeholders filled at inference time. See prompts/README.md for the full variable reference.
| Prompt File | Task |
|---|---|
| bin_assign.txt | BinAssign |
| bin_summary.txt | BinSummarize |
| comment_delineate.txt | CommentDelineate |
| tribe_extract.txt | TribeBench |
| cxie_prompt.txt | CXBench |
| eaie_prompt.txt | EABench |
| eisie_prompt.txt | EISBench |
| sie_prompt.txt | FedRegBench |
| no_context.txt / with_context.txt | NEPAQuAD |
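The filling of named placeholders at inference time can be sketched as below. This is a hypothetical illustration only: the placeholder syntax (`{project}`, `{comment}`) and the variable names are assumptions, not the actual template variables, which are documented in prompts/README.md.

```python
# Hypothetical sketch of filling a prompt template at inference time.
# Placeholder syntax and variable names here are assumptions for
# illustration; see prompts/README.md for the real variable reference.
from pathlib import Path


def fill_template(template_path: str, **variables: str) -> str:
    """Read a task's prompt template and substitute its named placeholders."""
    return Path(template_path).read_text().format(**variables)


# Inline demo string standing in for a file such as prompts/bin_assign.txt:
demo_template = (
    "Assign the public comment below to one of the predefined bins "
    "for project {project}.\n\nComment:\n{comment}"
)
prompt = demo_template.format(
    project="WS",
    comment="Solar siting should avoid critical desert habitat.",
)
```

The filled `prompt` string is then passed to the model under evaluation; one template per task keeps inputs comparable across models.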
## Evaluation Metrics
The metrics/ folder contains metric configurations for the information extraction tasks. See metrics/README.md for the full field-to-metric mapping.
Metrics include:
- String similarity: word edit distance, character edit distance, fuzzy matching, semantic embedding similarity, abbreviation-aware similarity
- Set-based: exact precision/recall/F1 (for CX codes), soft precision/recall/F1 with embedding similarity (for agency lists)
- Numeric: numerical error (for Year)
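To make the set-based and string metrics concrete, here is a minimal sketch of word-level edit distance and soft precision/recall/F1 over list-valued fields. Note the assumption: the benchmark's soft metric uses semantic embedding similarity, whereas this sketch substitutes `difflib.SequenceMatcher` as the pairwise similarity function purely for illustration.

```python
# Sketch of two metric families, NOT the benchmark's exact implementation.
# difflib's ratio stands in for the embedding-based similarity actually used.
from difflib import SequenceMatcher


def word_edit_distance(pred: str, gold: str) -> int:
    """Levenshtein distance computed over word tokens rather than characters."""
    a, b = pred.split(), gold.split()
    dp = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, wb in enumerate(b, 1):
            # prev holds the old dp[j-1] (substitution cell) before overwrite.
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (wa != wb))
    return dp[-1]


def soft_prf(predicted: list[str], gold: list[str], threshold: float = 0.8):
    """Soft precision/recall/F1: an item matches if any counterpart clears
    the similarity threshold (useful for agency lists with name variants)."""
    def sim(x: str, y: str) -> float:
        return SequenceMatcher(None, x.lower(), y.lower()).ratio()

    tp_pred = sum(any(sim(p, g) >= threshold for g in gold) for p in predicted)
    tp_gold = sum(any(sim(g, p) >= threshold for p in predicted) for g in gold)
    precision = tp_pred / len(predicted) if predicted else 0.0
    recall = tp_gold / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1


p, r, f1 = soft_prf(
    ["Bureau of Land Management", "NRC"],
    ["Bureau of Land Management (BLM)", "Nuclear Regulatory Commission"],
)
```

With the string-similarity stand-in, the parenthesized "(BLM)" suffix still matches its gold counterpart, while the unexpanded "NRC" abbreviation does not; an abbreviation-aware or embedding-based similarity closes that gap.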
## Contact
We welcome feedback and suggestions. Please email us at [email protected].
## Acknowledgement
This work was supported by the Office of Policy, U.S. Department of Energy, and Pacific Northwest National Laboratory, which is operated by Battelle Memorial Institute for the U.S. Department of Energy under Contract DE-AC05-76RL01830.
## Disclaimer
This material was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor the United States Department of Energy, nor the Contractor, nor any of their employees, nor any jurisdiction or organization that has cooperated in the development of these materials, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, software, or process disclosed, or represents that its use would not infringe privately owned rights.
Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof, or Battelle Memorial Institute. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof.
## License
Copyright Battelle Memorial Institute 2026
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
- Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
- Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.