
Authors

Xueyang Wu, Tao Zhong, Shujun Liang, Li Wang, Gang Li, Yu Zhang

Abstract

In neuroscience research, automatic segmentation of macaque brain tissues in magnetic resonance imaging (MRI) is crucial for understanding brain structure and function during development and evolution. Acquisition of multimodal information is a key enabler of accurate tissue segmentation, especially in early-developing macaques with extremely low contrast and dynamic myelination. However, many MRI scans of early-developing macaques are acquired only in a single modality. While various generative adversarial networks (GANs) have been developed to impute missing modality data, current solutions treat modality generation and image segmentation as two independent tasks, neglecting their inherent relationship and mutual benefits. To address these issues, this study proposes a novel Collaborative Segmentation-Generation Framework (CSGF) that enables joint missing modality generation and tissue segmentation of macaque brain MR images. Specifically, the CSGF consists of a modality generation module (MGM) and a tissue segmentation module (TSM) that are trained jointly via cross-module feature sharing (CFS) and transfer of the generated modality. Training the MGM under the supervision of the TSM enforces anatomical feature consistency, while the TSM learns multi-modality information related to anatomical structures from both real and synthetic multi-modality MR images. Experiments show that the CSGF outperforms the conventional independent-task mode on an early-developing macaque MRI dataset with 155 scans, achieving superior quality in both missing modality generation and tissue segmentation.
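To make the collaborative design concrete, the following is a minimal, hypothetical PyTorch sketch of the pipeline described in the abstract: the generation module synthesizes the missing T2w from the available T1w, the segmentation module consumes the real T1w and the synthetic T2w together with shared encoder features (the CFS idea), and the segmentation loss backpropagates into the generator. It is an illustrative toy with made-up layer sizes and module names, not the authors' released implementation (the paper uses a PTNet-style generator and a 3D U-Net; see the code repository linked below).

```python
# Illustrative toy only: module names, layer sizes, and losses are hypothetical,
# not the authors' implementation (the paper uses a PTNet-style MGM and a 3D U-Net TSM).
import torch
import torch.nn as nn

class ToyMGM(nn.Module):
    """Modality generation module: synthesizes the missing T2w from the available T1w."""
    def __init__(self, ch=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv3d(1, ch, 3, padding=1), nn.ReLU())
        self.dec = nn.Conv3d(ch, 1, 3, padding=1)

    def forward(self, t1w):
        feat = self.enc(t1w)         # encoder features shared with the TSM (the CFS idea)
        return self.dec(feat), feat  # synthetic T2w and the shared features

class ToyTSM(nn.Module):
    """Tissue segmentation module: segments from real T1w + synthetic T2w + shared features."""
    def __init__(self, ch=16, n_classes=4):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv3d(2, ch, 3, padding=1), nn.ReLU())
        self.dec = nn.Conv3d(2 * ch, n_classes, 3, padding=1)  # sees concatenated MGM features

    def forward(self, t1w, t2w, shared_feat):
        feat = self.enc(torch.cat([t1w, t2w], dim=1))
        return self.dec(torch.cat([feat, shared_feat], dim=1))

mgm, tsm = ToyMGM(), ToyTSM()
t1w = torch.randn(1, 1, 32, 32, 32)            # available modality
labels = torch.randint(0, 4, (1, 32, 32, 32))  # BG / CSF / GM / WM tissue labels

fake_t2w, shared = mgm(t1w)                    # MGM imputes the missing modality
logits = tsm(t1w, fake_t2w, shared)            # TSM segments from both modalities
seg_loss = nn.CrossEntropyLoss()(logits, labels)
seg_loss.backward()                            # gradients also reach the MGM (collaborative training)
```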

Link to paper

DOI: https://doi.org/10.1007/978-3-031-43901-8_45

SharedIt: https://rdcu.be/dnwDT

Link to the code repository

https://github.com/XueyangWWW/CSGF

Link to the dataset(s)

https://pubmed.ncbi.nlm.nih.gov/28210206/


Reviews

Review #2

  • Please describe the contribution of the paper

    (1) This study proposes a novel Collaborative Segmentation-Generation Framework (CSGF) that enables joint missing modality generation and tissue segmentation of macaque brain MR images. (2) The training of the MGM under the supervision of the TSM enforces anatomical feature consistency, while the TSM learns multi-modality information related to anatomical structures from both real and synthetic multi-modality MR images. (3) Experiments show that the CSGF outperforms the conventional independent-task mode on an early-developing macaque MRI dataset with 155 scans.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    (1) The writing of this paper is good and the structure is excellent. (2) The idea is pretty novel. They present a 3D Collaborative Segmentation-Generation Framework (CSGF) for early-developing macaque MR images. The CSGF is designed in the form of multi-task collaboration and feature sharing, which enables it to complete the missing modality generation and tissue segmentation simultaneously.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    (1) Ablation experiments on the MGM loss and TSM loss could be added.

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    I think this paper is reproducible.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html

    See the main weaknesses of the paper.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    7

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    See the main strengths and weaknesses of the paper.

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    6

  • [Post rebuttal] Please justify your decision

    The authors have answered my concerns. Considering the authors’ answers and the advice of the other reviewers, I think this paper can be accepted.



Review #3

  • Please describe the contribution of the paper

    This paper proposes a framework (CSGF) that combines conditional image generation and image segmentation, aiming to improve brain tissue segmentation for early-developing macaque brain MRI. The CSGF consists of two modules: a Modality Generation Module (MGM) and a Tissue Segmentation Module (TSM). The MGM is essentially a conditional GAN that synthesizes T2w conditioned on T1w. The TSM is a regular UNet that takes the real T1w and the synthesized T2w as input. In addition to the intrinsic skip connections within the UNet, the authors employ skip connections between the MGM encoder and the TSM decoder, terming this cross-module feature sharing/collaborative learning (CFS).

    In situations where only T1w MRI is available, the CSGF can be used to improve segmentation accuracy. The improvement for CSF segmentation is decent, but the improvements for GM and WM are only moderate.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    1. Combining conditional GAN and segmentation model training improves both segmentation and synthesis performance.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    1. Limited novelty and improvement: The novelty of this paper is limited, as it is built upon two existing frameworks (PTNet and UNet) that are linked through simple feature concatenation. The authors propose CFS to improve segmentation performance by sharing features between the GAN encoder and the segmentation decoder. However, the improvement is not significant (less than 0.5% in Dice). Combining segmentation and GAN training to improve segmentation has also been explored (e.g., [1]).
    2. Lack of comparisons with other SOTA segmentation baselines: The authors only compare with one baseline 3D UNet model, which is insufficient to justify that the proposed framework is superior in macaque brain tissue segmentation. Comparisons with other SOTA baselines, such as nnU-Net, Swin UNet, TransUNet, and SegFormer, would help justify the contribution of this work.

    [1] T. D. Bui, L. Wang, W. Lin, G. Li and D. Shen, “6-Month Infant Brain MRI Segmentation Guided by 24-Month Data Using Cycle-Consistent Adversarial Networks,” 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, 2020, pp. 359-362, doi: 10.1109/ISBI45749.2020.9098515.

  • Please rate the clarity and organization of this paper

    Excellent

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The authors state in the checklist that they will release the code. The dataset is public. The study is reproducible.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
    1. Adding more comparisons with other segmentation frameworks would be helpful.
    2. There are methods other than concatenation for connecting the GAN and the segmentation network. Fusing their features with a transformer could be explored and may improve performance.
  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    4

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Given the limited novelty and improvement in the segmentation results, the contribution of this paper may be insufficient for MICCAI. In addition, the authors failed to compare with more segmentation baselines, which limits the justification of their work. Therefore, I would recommend a rejection of this work.

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A



Review #4

  • Please describe the contribution of the paper

    The study presents a 3D Collaborative Segmentation-Generation Framework (CSGF), built on multi-task collaboration and feature sharing, for early-developing macaque MR images. The CSGF is designed to handle missing-modality brain MR images for tissue segmentation in early-developing macaques. Experiments show that the CSGF achieves superior quality in both missing modality generation and tissue segmentation.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
    • The authors present a collaborative-task mode that combines the image generation and segmentation tasks through feature sharing and supervised learning. The idea of integrating tissue segmentation and modality generation into a unified framework is interesting and effective.
    • The CSGF outperforms all compared methods on both the segmentation and generation tasks.
    • The paper is well-organized.
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
    • The feature-sharing method is too simple; a more suitable fusion strategy could be discussed further in future work.
    • The presentation of the results could be improved. More information could be found in Q9.
  • Please rate the clarity and organization of this paper

    Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    The description of the method is clear, and the dataset used is public.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
    • In the MGM, what precisely is the GAN loss? Does it mean that a discriminator is involved in the training stage?
    • There are multi-task learning (MTL) methods that generate a new image and a segmentation simultaneously. What do you think is the advantage of your method compared with those MTL methods? In addition, a comparison with these MTL-based methods is necessary.
    • You visualized the results of the segmentation task; how about visualizing the results of the generation task?
    • You divide the images into two groups, ‘Infancy’ and ‘Yearlings and juveniles’, according to the developmental stage of the macaques. However, since large changes occur in the first 0–12 months, you could show more results for the ‘Infancy’ group to illustrate the effectiveness of the proposed model. For example, you could report segmentation results for ‘0–4 months’, ‘4–8 months’ and ‘8–12 months’ separately.
  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    5

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    The research area and the idea of collaborative-task training.

  • Reviewer confidence

    Confident but not absolutely certain

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    This paper proposes a 3D Collaborative Segmentation-Generation Framework for macaque MR images. The clarity is good. Reviewers raised some concerns, such as the novelty with respect to previous works and the lack of SOTA comparison results.




Author Feedback

We thank all reviewers for their valuable comments. We are encouraged that the key contributions of our work are affirmed by the reviewers: 1) ‘good clarity’ (all reviewers); 2) ‘pretty novel idea and enables to complete the missing modality generation and tissue segmentation simultaneously’ (R2); 3) ‘combining conditional GAN and segmentation model training improves both segmentation and synthesis performance’ (R3); 4) ‘interesting and effective idea’ (R4).

Ablation experiments on the MGM loss and TSM loss (R2). We would like to clarify that we have indeed conducted ablation experiments on different module combinations, which can be considered ablation experiments for these two losses. In Table 1, the comparison between the CSGF (“CSGF w/o CFS”) and a single U-Net (“U-Net”) highlights the effectiveness of the MGM loss in facilitating the segmentation task. In Table 2, the comparison between the CSGF (“CSGF w/o CFS”) and a single PTNet (“PTNet”) showcases the effectiveness of the TSM loss in facilitating the generation task. However, we acknowledge the need for more comprehensive ablation research on loss functions, such as exploring focal loss or weighted loss as alternatives for the TSM loss, which will be a crucial part of our future work.

Novelty, improvement, and advantage of this research in comparison to other works that combine generation and segmentation (R3 and R4). Firstly, it is imperative to clarify that the reference [1] mentioned by R3 employs the pre-trained weights of the segmentation network to extract segmentation features, fortifying the generation network. However, it does not concurrently update the segmentation and generation networks by propagating gradients. A significant distinction between our work and prior similar studies (e.g., [1]) lies in the fact that the proposed CSGF operates in a collaborative-task mode rather than the independent-task mode conventionally employed by previous approaches. Within our framework, the generation component is trained under the supervision of the subsequent segmentation part, enabling the generation of missing modalities that are more conducive to segmentation. Table 1 illustrates that this collaborative-task mode (“CSGF w/o CFS”) leads to an over 2% Dice improvement compared to the conventional independent-task mode (“PTNet+U-Net(ind.)”). Furthermore, the incorporation of the CFS module (“CSGF with CFS”) yields additional improvement. [1]: T. D. Bui, et al., “6-Month Infant Brain MRI Segmentation …”.
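For readers unfamiliar with the distinction drawn here, the following hypothetical sketch (with placeholder networks and losses, in the same spirit as the toy example after the abstract) illustrates the difference between the two modes: in the independent-task mode the synthetic image is detached, so the segmentation loss never updates the generator, whereas in the collaborative-task mode the segmentation gradients also flow back into the generator.

```python
# Hypothetical sketch of independent- vs. collaborative-task training; the networks
# and losses are placeholders, not the paper's PTNet/U-Net backbones or its full GAN objective.
import torch
import torch.nn as nn

generator = nn.Conv3d(1, 1, 3, padding=1)   # stand-in MGM: T1w -> synthetic T2w
segmenter = nn.Conv3d(2, 4, 3, padding=1)   # stand-in TSM: [T1w, T2w] -> 4 tissue classes
opt = torch.optim.Adam(list(generator.parameters()) + list(segmenter.parameters()), lr=1e-4)
l1, ce = nn.L1Loss(), nn.CrossEntropyLoss()

def train_step(t1w, real_t2w, labels, collaborative=True):
    fake_t2w = generator(t1w)
    gen_loss = l1(fake_t2w, real_t2w)       # stand-in for the full GAN + reconstruction objective
    if not collaborative:
        fake_t2w = fake_t2w.detach()        # independent-task mode: cut the gradient path to the generator
    seg_loss = ce(segmenter(torch.cat([t1w, fake_t2w], dim=1)), labels)
    (gen_loss + seg_loss).backward()        # collaborative mode: seg_loss also updates the generator
    opt.step()
    opt.zero_grad()

t1w = torch.randn(1, 1, 32, 32, 32)
real_t2w = torch.randn(1, 1, 32, 32, 32)
labels = torch.randint(0, 4, (1, 32, 32, 32))
train_step(t1w, real_t2w, labels, collaborative=True)
```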

Lack of comparison with other SOTA (R3). The focus of this study is to prove the effectiveness of the proposed collaborative-task mode compared to the conventional independent-task mode. In fact, any generation structure and segmentation structure can easily be adopted as the backbone of our CSGF framework, e.g., nnU-Net, Swin UNet and TransUNet mentioned by R3. Limited by the paper length, this study selected only the most widely used U-Net as the backbone. In future extensions of this work, we will combine the advantages and distinctive features of these SOTA single-task methods to enhance our CSGF and provide more comprehensive comparisons.

Several minor concerns (R4). Firstly, the generation part of the proposed CSGF is a GAN, which entails the involvement of a discriminator during training. We apologize for the vague descriptions, and we will include a concise explanation in the final version. Secondly, it is important to emphasize that a focus of this work is the improvement of the downstream task, i.e., tissue segmentation. Due to the constraint on paper length, we didn’t provide visualization results of the generation task. However, we presented quantitative results in Table 2, which demonstrate the improved performance of generation through segmentation supervision. Lastly, we appreciate the constructive comment regarding the subdivision of the Infancy stage. We will provide these results in the final version.




Post-rebuttal Meta-Reviews

Meta-review # 1 (Primary)

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The authors provided a segmentation framework for early-developing macaque MR images. Most concerns were addressed in the rebuttal.



Meta-review #2

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The rebuttal has generally addressed the major concerns raised by the reviewers, including the novelty against previous work (pre-trained vs. concurrently updated networks) and the SOTA comparison. The authors also mentioned that ablation studies have been included in the paper. The paper’s idea has novelty and merit and deserves to be accepted at MICCAI.



Meta-review #3

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The paper presents a 3D Collaborative Segmentation-Generation Framework for macaque MR images. The two main concerns raised by the reviewers referred to the limited novelty of the work (built on PTNet and UNet) and the lack of comparisons with other SOTA segmentation baselines. The authors aimed to address these concerns; however, their claims were not entirely convincing. The authors are encouraged to extend their work based on the plans outlined in the rebuttal.


