29th INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING
AND COMPUTER ASSISTED INTERVENTION
4-8 OCTOBER 2026, ADNEC CENTRE, ABU DHABI

REVIEWER GUIDELINES

This document provides detailed guidelines for MICCAI 2026 reviewers. It summarizes what makes a good MICCAI review and outlines our expectations of you as a reviewer. We also include the rules that MICCAI 2026 adopts for paper anonymization as part of its double-blind peer review process. Please read these guidelines as part of the overall MICCAI 2026 review process document.

Please be aware that reviews of accepted papers will be made public (without disclosing the reviewers' identity), together with author responses and Area Chair meta-reviews.

Reviewers will be acknowledged in the conference proceedings. The top reviewers will be offered free registration to attend MICCAI 2026.

1. What Makes a Good Review

The role of a reviewer is to identify excellent papers that the MICCAI community must hear about, and tell the program committee which papers are of wide interest and could have a great impact on the field. A good review presents an informed expert assessment of the paper and supports it with details on the strengths and weaknesses of the paper.

The components of the reviewing form are as follows:

  • A summary of the paper, which can be as short as a few sentences. This section explains the major contributions, what the authors did, how they did it, and the results. It also helps authors to verify that the reviewer understood their approach and interpretation of the results.
  • The reviewer's assessment of the major strengths of the paper. A reviewer should write about a novel formulation, a demonstration of clinical feasibility, an original way to use data, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Provide details that justify your assessment. For instance, if a method is novel, explain what aspect is novel and why this is interesting.
  • The reviewer's assessment of the major weaknesses of the paper. Provide a list of points that summarize your concerns about particular aspects of the paper. Provide details that justify your assessment. For instance, if a method is not novel, provide citations to prior work.
  • The reviewer's assessment of the clarity of presentation, paper organization, and other stylistic aspects of the paper. It is important to know whether the paper is very clear and a pleasure to read, or hard to understand. Provide details that justify your assessment. For instance, detail whether the paper is hard to read because of its technical level or because of suboptimal organization.
  • Comment on the reproducibility of the paper. Where possible, we encourage authors to use open data or to make their data and code available for open access by other researchers. We understand that due to certain restrictions, some researchers are unable to release their proprietary dataset and code; therefore, a clear and detailed description of the algorithm, its parameters, and the dataset is highly valuable. Please provide comments about whether the paper provides sufficient details about the models/algorithms, datasets, and evaluation.
  • Detailed constructive comments should be provided to help the authors improve their paper or expand it into a journal version. Comments should be backed up by detailed arguments. Minor problems, such as grammatical errors, typos, and other problems that can be easily fixed by carefully editing the text of the paper, should also be listed.
  • Your recommendation on whether to accept or reject the paper: Taking into account all points above, should this paper be presented at the conference? Is it an interesting contribution? Is it a significant advance for the field? Is the paper of sufficiently high clinical impact to outweigh a lower degree of methodological innovation? Please remember that a novel algorithm is only one of many ways to contribute. Other examples include (but are not limited to) a novel interventional system, an application of existing methods to a new problem, and new insights into existing methods. A paper makes a good contribution if you think that others in the community would want to know about it. As a guide, note that MICCAI typically accepts around 30% of submissions.
  • A justification of your recommendation. What were the major factors in making your assessment? How did you weigh the strengths and weaknesses? Make sure that the reasons for your overall recommendation to accept or reject are clear to the program committee and the authors.
  • Ranking of this paper in your review stack: This information will be used to calibrate the overall rating. Please try your best to avoid ties.
  • The expertise of the reviewer. If your expertise is limited to a particular aspect of the paper, this should be brought to the attention of the AC. The review is more likely to be taken seriously if the limitations of the reviewer's understanding are clearly acknowledged.

Please avoid:

  • Simply summarizing the paper and adding a couple of questions about low-level details in the paper.
  • Expressing an opinion without backing it up with specifics. For instance, if a method is novel, explain what aspect is novel and why this is interesting. If the method is not novel, explain why and provide a reference to prior work.
  • Being rude. A good review is polite. Just like in a conversation, being rude is typically ineffective if one wants to be heard.
  • Asking the authors to substantially expand their paper. The paper should be evaluated as submitted. The conference has no mechanism to ensure that any proposed changes would be carried out. Moreover, the authors are unlikely to have room to add any further derivations, plots, or text.

Your reviews will be published anonymously. However, please make sure that they are written in a way you would approve to appear under your name. Outstanding reviewers will be acknowledged and offered free registration to attend the MICCAI conference.

2. Specific Reviewing Notes

Historically, we have received a very large number of papers in Medical Image Computing (MIC) but not as many in Computer-Assisted Interventions (CAI). Additionally, we now have a dedicated session for Clinical Translation. To ensure we select an appropriate spectrum of papers in all categories and sessions, please keep the following points in mind while reviewing.

General review considerations: When reviewing all MICCAI papers, consider

  1. whether the proposed methods are innovative or
  2. whether the application is innovative.

In particular, the following questions should be asked:

  1. Is the topic of the paper clinically significant?
  2. Do the authors clearly explain data collection, processing, and division methods?
  3. Do the data accurately reflect the range and diversity of potential patients and disease manifestations?
  4. Are the data labels (if applicable) of sufficient quality to support the claimed performance or analysis of the algorithms?
  5. Do the authors report a sufficient number and type of performance measures to accurately represent the strengths and weaknesses of the algorithms? Are performance measures reported with measures of uncertainty or confidence (e.g., error bars or standard deviations)?
  6. Are the results and comparison with prior art placed in the context of a clinical application in terms of significance and contribution? Have the authors performed a proper statistical analysis of the results (e.g., p-values)? An illustrative sketch of such reporting follows this list.
  7. Does the work make a substantial contribution to the field or society, or is it mostly incremental over previous work?
  8. Do the authors discuss limitations and other implications of their methods and directions for future research?
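
To illustrate points 5 and 6 above, here is a minimal, hypothetical sketch in Python of the kind of uncertainty reporting and statistical analysis a reviewer might expect to see. The Dice scores and sample sizes are made up for illustration; this is not a prescribed procedure, and other measures or tests may be equally appropriate.

    # Hedged illustration only: hypothetical per-case Dice scores for a proposed
    # method and a baseline, reported with mean +/- standard deviation, a 95%
    # bootstrap confidence interval on the mean difference, and a paired test.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    dice_proposed = rng.normal(0.86, 0.04, size=30)  # made-up scores
    dice_baseline = rng.normal(0.83, 0.05, size=30)  # made-up scores

    # Mean +/- standard deviation for each method.
    print(f"proposed: {dice_proposed.mean():.3f} +/- {dice_proposed.std(ddof=1):.3f}")
    print(f"baseline: {dice_baseline.mean():.3f} +/- {dice_baseline.std(ddof=1):.3f}")

    # 95% bootstrap confidence interval for the mean per-case difference.
    diffs = dice_proposed - dice_baseline
    boot = [rng.choice(diffs, size=diffs.size, replace=True).mean() for _ in range(10000)]
    ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
    print(f"mean difference: {diffs.mean():.3f}, 95% CI [{ci_low:.3f}, {ci_high:.3f}]")

    # Paired significance test on the same cases (Wilcoxon signed-rank).
    statistic, p_value = stats.wilcoxon(dice_proposed, dice_baseline)
    print(f"Wilcoxon signed-rank: statistic={statistic:.1f}, p={p_value:.4f}")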

Specific considerations should be given to the following categories of submissions:

CAI-based papers: We encourage submissions of papers on the implementation of, and training for, Computer-Assisted Intervention approaches. In particular, we wish to highlight the use of Medical Image Computing techniques that have become integral components of Computer-Assisted Intervention. We encourage technologies, such as point-of-care imaging, that help make healthcare more accessible. Specific considerations for your review of CAI papers should include, but are not limited to:

  1. Presentation of a device or technology that has potential clinical significance.
  2. Demonstration of clinical feasibility, even on a single subject/animal/phantom.
  3. Demonstration of robust system integration and validation.
  4. Novel MIC approach to solving an unmet CAI need.
  5. Proposal of a cost-effective (frugal technology) approach to implementing an otherwise expensive CAI solution.
  6. Description of a system or device that is robustly validated against appropriate performance metrics.
  7. Human factors evaluation of CAI systems.

Clinical Translation papers: This session will emphasize the shift of MIC and CAI research from theory to practice by reflecting on the real-world challenges and potential impact of translating MIC and CAI methodologies into clinical workflows and evaluations. The philosophy of this dedicated session is to keep a high standard for methodology development while enabling a strong focus on the clinical application. Specific considerations for your review of Clinical Translation papers should include but are not limited to:

  1. Barriers and challenges in translation, and how to overcome these
  2. Robustness and reliability evaluation of algorithms
  3. Insights into the usability of MIC methods and CAI systems
  4. User interaction, adoption and acceptance
  5. Performance monitoring and clinical deployment

3. Formal Rules

Confidentiality: You have the responsibility to protect the confidentiality of the ideas represented in the papers you review. MICCAI submissions are by their very nature not published documents. The work is considered new or proprietary by the authors. Authors are allowed to submit a novel research manuscript that has been archived for future dissemination (e.g., on the arXiv or BioRxiv platforms). Sometimes the submitted material remains confidential to the authors' employers. Sending a paper to MICCAI for review does not constitute a public disclosure. Therefore, you are required to strictly adhere to the following recommendations:

  • Do not show the paper to anyone who is not directly involved in assessing it. If you request the help of your colleagues or students, they are also bound by the same confidentiality requirements.
  • Do not show any results, videos/images or any of the supplementary material to non-reviewers.
  • Do not use ideas from a paper that you review to develop new ones of your own before its publication.
  • After the review process, destroy all copies of papers and supplementary material associated with the submission.

3.1. Policies on LLM Use in the Review Process

It is strictly prohibited to disclose any portion of a submitted paper (including text, figures, tables, experimental results, screenshots, or supplemental materials) to any large language model (LLM) or external AI system for the purpose of review. Uploading, copying, or describing a manuscript's content to an AI tool constitutes a direct violation of the anonymity and confidentiality standards that govern the peer-review process. A reviewer's responsibility is to personally assess the work, not to outsource evaluation to a third party.

Using external LLMs to analyze, summarize, critique, or rewrite any parts of the paper is treated as a breach of confidentiality equivalent to sharing the manuscript with an unauthorized individual. Because these models operate as data processors, inputting private content jeopardizes the anonymity of both authors and reviewers. If such violations are detected, program chairs and the MICCAI board may take serious action, potentially including revocation of reviewing and authorship privileges, reporting to institutional ethics boards, or additional consequences decided at the discretion of the conference leadership and the MICCAI board.

The use of LLMs is allowed only as a general-purpose writing assistance tool. You may use an LLM to polish the wording of your review (e.g., to correct grammar) once you have written it. Reviewers should understand that they take full responsibility for the contents of their reviews, including content generated by LLMs that could be construed as scientific misconduct or plainly false (e.g., incorrect summaries of the paper).

3.2. Conflict of Interest

The double-blind review process will help conceal the authors' identities. If you recognize the work or the authors and feel this could present a conflict of interest, decline the review and inform the Area Chair and the Program Chairs. You have a conflict of interest if any of the following is true:

  • You belong to the same institution as an author, or have been at the same institution in the past three years,
  • You have co-authored a publication with an author in the past three years,
  • You have held or applied for a grant with an author in the past three years,
  • You currently collaborate or plan to collaborate with an author,
  • You have a business partnership with an author,
  • You are relatives or have a close personal relationship with an author.

4. Anonymization Rules

MICCAI 2026 follows a double-blind reviewing process, according to which anonymity should be preserved for both reviewers and submitting authors.

Anonymity should be kept in mind during the paper submission, review, and rebuttal process.

Ensuring anonymity: Papers violating the guidelines for anonymity may be rejected without further consideration. At the same time, reviews that reveal the reviewer's identity are likely to have a lower impact on the PC's decision process. Please keep the following in mind during the review process:

  • Authors are asked to preserve their anonymity during the reviewing process, including not listing their names, affiliations, or websites, and omitting acknowledgments. All this information will be included in the camera-ready and published version.
  • Please see the Submission Guidelines for additional details on how authors have been instructed to preserve their anonymity.
  • Reviewers should also keep their identities hidden from the authors at all times.
  • Reviewers should not ask authors to cite the reviewer's own papers unless it is essential (e.g., the authors are expanding on the reviewer's previous work or using the reviewer's dataset); requesting unnecessary citations is unprofessional and also compromises the reviewer's anonymity.
  • If you accidentally discover the identity of the authors of a paper, make every effort to treat the paper fairly. It is NOT acceptable to accept or reject a paper based on the prior bias a reviewer might have about its authors.
  • Please report any potential breach of the anonymization rules in your assigned reviews.

ArXiv and preprint papers: With the increasing popularity of publishing technical reports and preprints on arXiv or other preprint hosts, a reviewer may accidentally uncover the authors of a paper.

  • Reviewers should not attempt to identify authors through arXiv submissions or other publicly available technical reports. If the reviewer accidentally uncovers the authors' identity via arXiv, they should not allow this information to influence their review.
  • ArXiv papers are not considered prior work since they have not been peer-reviewed. Therefore, citations to these papers are not required and reviewers should not penalize a paper that fails to cite an arXiv submission.

Thank you, in advance, for your efforts and contributions toward yet another successful MICCAI Conference,
MICCAI 2026 Program Chairs