
STAMP Your Content: Proving Dataset Membership via Watermarked Rephrasings

This repository contains the official code for the ICML 2025 paper STAMP Your Content: Proving Dataset Membership via Watermarked Rephrasings by Saksham Rastogi, Pratyush Maini, and Danish Pruthi.

Overview

Given how large parts of publicly available text are crawled to pretrain large language models (LLMs), data creators increasingly worry about the inclusion of their proprietary data for model training without attribution or licensing. Their concerns are also shared by benchmark curators whose test sets might be compromised. In this paper, we present STAMP, a framework for detecting dataset membership—i.e., determining the inclusion of a dataset in the pretraining corpora of LLMs. Given an original piece of content, our proposal involves first generating multiple rephrases, each embedding a watermark with a unique secret key. One version is to be released publicly, while others are to be kept private. Subsequently, creators can compare model likelihoods between public and private versions using paired statistical tests to prove membership. We show that our framework can successfully detect contamination across four benchmarks which appear only once in the training data and constitute less than 0.001% of the total tokens, outperforming several contamination detection and dataset inference baselines. We verify that STAMP preserves both the semantic meaning and utility of the original data. We apply STAMP to two real-world scenarios to confirm the inclusion of paper abstracts and blog articles in the pretraining corpora.
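For intuition, here is a minimal, hypothetical sketch of the core test described above, not the repository's exact pipeline: if the public rephrasings were part of the pretraining data, a suspect model should assign them systematically higher likelihood than the held-back private ones. The model name, the toy documents, and the use of scipy's ttest_rel are our illustrative assumptions.

# Hypothetical sketch of STAMP's core idea (not the repo's exact code).
# Score the released (public) and held-back (private) rephrasings of each
# document under a suspect model, then run a one-sided paired test.
import torch
from scipy.stats import ttest_rel
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "EleutherAI/pythia-1b"  # assumption: any suspect causal LM
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL).eval()

def avg_log_likelihood(text):
    # Average per-token log-likelihood of `text` under the model.
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token-level NLL
    return -loss.item()

# Placeholder documents; in practice these are paired rephrasings of the
# same originals, each watermarked with a different secret key.
public_docs = ["public rephrase 1", "public rephrase 2", "public rephrase 3"]
private_docs = ["private rephrase 1", "private rephrase 2", "private rephrase 3"]

public_scores = [avg_log_likelihood(t) for t in public_docs]
private_scores = [avg_log_likelihood(t) for t in private_docs]

# One-sided paired t-test: are the public versions scored higher than their
# private counterparts more consistently than chance would predict?
t_stat, p_value = ttest_rel(public_scores, private_scores, alternative="greater")
print(f"t = {t_stat:.3f}, one-sided p = {p_value:.4f}")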

Setup

To install the necessary packages, first create a conda environment.

conda create -n <env_name> python=3.10
conda activate <env_name>

Then, install the required packages with

pip install -r requirements.txt
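As a quick sanity check of the environment (assuming, as a guess on our part, that requirements.txt pins torch and transformers):

# Hypothetical sanity check; assumes torch and transformers are in requirements.txt.
import torch
import transformers

print("torch:", torch.__version__)
print("transformers:", transformers.__version__)
print("CUDA available:", torch.cuda.is_available())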

Artifacts

We provide the following artifacts for future research and reproducibility:

Models

Below are links to the trained models (continually pretrained on contaminated data) from the paper's experiments, hosted on Hugging Face. They can also be found in this Hugging Face Collection. A loading sketch follows the list below.

Pythia 1B models contaminated with benchmarks
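The checkpoints should load with the standard transformers API; the repository id below is a placeholder, so substitute the actual id from the collection.

# Sketch: load a contaminated checkpoint from the Hugging Face Hub.
# "your-org/pythia-1b-contaminated" is a placeholder id, not a real repository.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-org/pythia-1b-contaminated"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)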

Datasets

  • The benchmarks folder contains all the test files used to produce the paper's results, including both original and rephrased versions of the four datasets evaluated in the paper; see the sketch below for enumerating them.
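A small sketch for listing those files (the benchmarks directory name comes from the description above; everything else here is an assumption):

# Sketch: list every test file shipped in the benchmarks folder.
from pathlib import Path

for path in sorted(Path("benchmarks").rglob("*")):
    if path.is_file():
        print(path)  # original and rephrased versions live side by side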

Acknowledgements

Our codebase builds heavily on the following repositories:

  1. LLM Dataset Inference
  2. MarkLLM

Issues

If you have any questions, feel free to open an issue on GitHub or contact Saksham ([email protected]).

Reference

If you find this repo useful, please consider citing:

@misc{rastogi2025stampcontentprovingdataset,
      title={STAMP Your Content: Proving Dataset Membership via Watermarked Rephrasings}, 
      author={Saksham Rastogi and Pratyush Maini and Danish Pruthi},
      year={2025},
      eprint={2504.13416},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2504.13416}, 
}
