Submission instructions and review process

Instructions for preparing submissions

  • For all three tracks, submissions should be short papers (extended abstracts) of up to 4 pages in PDF format, typeset using the NeurIPS paper template for the Research and Perspectives tracks and the NeurIPS Dataset template for the Datasets & Benchmarks track.
  • The review process will take place on OpenReview; submissions must be made via the workshop's OpenReview page.
  • References are unlimited and do not count against the 4-page limit.
  • Appendices are discouraged, and reviewers will not be required to read beyond the first 4 pages.
  • Workshop organizers retain the right to reject submissions for editorial reasons: for example, any paper exceeding the page limit or modifying the NeurIPS template will be desk rejected.
  • We invite authors to follow the guidelines and best practices of the NeurIPS conference (see also the main conference Datasets and Benchmarks call for guidelines pertaining to the Datasets & Benchmarks track). A checklist or broader impact statement is not required; you are, however, welcome to add a short broader impact statement, which will not count toward the 4-page limit.
  • All authors must have active OpenReview profiles and be registered as authors at the time of submission. We will not allow authors to be added after the review process has begun.
  • Submissions will be kept confidential until they are accepted and the authors confirm that they can be included in the workshop. If a submission is not accepted, or is withdrawn for any reason, it will remain confidential and will not be made public.

Instructions for Datasets & Benchmarks track

The following requirements additionally apply to this track.

  • In line with the main-track Datasets & Benchmarks call, and because full anonymization can be difficult for datasets, anonymization is not required for this track.
  • Availability of dataset(s): the dataset(s) must be publicly available at the time of the workshop (e.g., via Zenodo). The submission must also include baseline results and public code (the baselines need not use machine learning). Additional data artifacts (e.g., a public simulator) may also be included and described in the submission.
  • The submitted paper should describe the following:
    • Properties of the dataset/benchmark;
    • The scientific and/or computational challenges addressed by releasing the dataset/benchmark;
    • Existing methods and/or potential solutions that ML could provide.

Review process

Submissions that follow the submission instructions (i.e., are not desk rejected for editorial reasons, such as exceeding the page limit or tampering with the template format) are sent for double-blind peer review. Below are the key points about this process, shared with reviewers and authors alike. Authors are expected to keep these in mind when preparing their submissions and when deciding whether to apply for the reviewer role.

  • Papers are limited to 4 pages. Appendices are accepted but discouraged; reviewers will not be required to read them.
  • There will be multiple reviewers for each paper.
  • Reviewers will be able to state their confidence in their review.
  • We will provide an easy-to-follow template for reviews so that both the pros and the cons of the submission can be highlighted.
  • Paper matching will be done via the OpenReview system.
  • Potential conflicts of interest based on institution and author collaboration are addressed through the OpenReview system.
  • Criteria for a successful submission include novelty, correctness, relevance to the intersection of ML and the physical sciences, and promise of future impact. Negative or null results that add value and insight are welcome.
  • There will be no rebuttal period. Minor flaws will not be the sole reason to reject a paper. Works in progress at an advanced stage are welcome.