Authors
Brian Teixeira, Vivek Singh, Birgi Tamersoy, Andreas Prokein, Ankur Kapoor
Abstract
Despite recent developments in CT planning that enabled automation in patient positioning, time-consuming scout scans are still needed to compute dose profile and ensure the patient is properly positioned. In this paper, we present a novel method which eliminates the need for scout scans in CT lung cancer screening by estimating patient scan range, isocenter, and Water Equivalent Diameter (WED) from 3D camera images. We achieve this task by training an implicit generative model on over 60,000 CT scans and introduce a novel approach for updating the prediction using real-time scan data. We demonstrate the effectiveness of our method on a testing set of 110 pairs of depth data and CT scans, resulting in an average error of 5mm in estimating the isocenter, 13mm in determining the scan range, and 10mm and 16mm in estimating the AP and lateral WED respectively. The relative WED error of our method is 4%, which is well within the International Electrotechnical Commission (IEC) acceptance criteria of 10%.
Link to paper
DOI: https://doi.org/10.1007/978-3-031-43990-2_40
SharedIt: https://rdcu.be/dnwLV
Link to the code repository
N/A
Link to the dataset(s)
N/A
Reviews
Review #1
- Please describe the contribution of the paper
This paper introduces a technique for estimating the water equivalent diameter, patient isocenter, and scanning start point from a 3D camera for non-contrast CT scans. The purpose is to improve throughput and to save a small amount of dose. The basic idea is to train a generative model of the WED from CT scans, which results in a WED manifold. Next, an encoder is trained to map the CT image to the WED for a specific patient. Real-time scan information can be used to refine the prediction.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
The work is highly novel and impactful. This is a good example of translational medicine. Rather than introducing yet another image processing method or deep learning network tweak, this paper aims to improve lung cancer screening programs by making the scan acquisition process easier using cheap off-the-shelf hardware.
Results consider international standards and achieve errors that meet the criteria set forth by those standards. In particular, the authors show that the mean deltaRel error was 0.43 for both lateral and AP WED measurements, which is about half the error required by IEC standards.
The paper is well written and easy to understand. For anyone involved in lung cancer screening, it is easy to see why this could be useful at large imaging centers that perform many scans per day. The flow of the paper makes sense, as it describes what the technique does, why it is important, and how it achieves this in a thorough and concise manner.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
There is a lack of detail on how the networks were trained. It isn’t clear if the training would be reproducible given access to the data but not the code. It is understandable that deep learning methods are fairly well established by now and thus not too much detail is required. However, there should be enough to give an idea of how to potentially replicate the study.
The main weakness is that it isn’t clear whether the real-time adjustment is achievable within a normal clinical workflow. I would assume that the scan occurs too quickly for the method to be aborted and the correction to be performed in a timely manner. The initial aborted scan then becomes like a high-dose scout scan, so why not just do a scout scan? The authors should report how often the real-time adjustment is expected to be needed. It seems as if the adjustment using a 20 mm window was tested for all validation data, and that was how they were able to achieve the IEC criteria. If that is true, then why not use a “gapped” scan, where slices are taken throughout the thorax at 15 mm gaps, to get the data needed for refinement? Would this actually be a workflow improvement?
What 3D camera was used for this?
- Please rate the clarity and organization of this paper
Excellent
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
There isn’t really much detail at all on the hardware (3D camera) or the network hyperparameters and deep learning libraries used, so reproducibility is poor for this work.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
This is a great paper, but there needs to be a more honest discussion of the need for real-time refinement: how often would it be needed, and how disruptive to the workflow would it be? More detail on the overall implementation is also needed to make this work more reproducible.
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
7
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
I think it is a very interesting paper with a high likelihood of making a real clinical impact. There are some obvious weaknesses, but I think the initial concept is promising and worthy of presentation at a conference.
- Reviewer confidence
Confident but not absolutely certain
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Review #2
- Please describe the contribution of the paper
This paper proposes a novel method for automating the CT lung cancer screening workflow using 3D camera images, which eliminates the need for time-consuming scout scans. The proposed method estimates scan range, isocenter, and Water Equivalent Diameter (WED) by training an implicit generative model on 60,000 CT scans and introducing real-time data updating. Testing on 110 pairs of depth data and CT scans showed an average error of 5mm in isocenter estimation, 13mm in scan range, and 10mm and 16mm in estimating AP and lateral WED, respectively. The method’s relative WED error meets the IEC acceptance criteria. Weaknesses include limited details on the model architecture and a small testing set. Constructive comments include providing more information and testing on larger datasets.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
- Use of an implicit generative model trained on a large dataset of over 60,000 CT scans
- Novel approach for updating the prediction using real-time scan data
- Demonstration of the effectiveness of the proposed method on a testing set of 110 pairs of depth data and CT scans
- Average error of 5mm in estimating the isocenter, 13mm in determining the scan range, 10mm and 16mm in estimating the AP and lateral WED respectively
- Relative WED error of the proposed method is well within the IEC acceptance criteria of 10%
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
Lack of comparison: The paper does not provide a comparison with other deep learning models (other than a combination of AutoEncoder and DenseNet) for estimating WED from depth images.
Limited details on the model and training process: The paper does not provide detailed information on the architecture of the model and the training process, which may make it difficult for readers to fully understand the proposed method.
- Please rate the clarity and organization of this paper
Very Good
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
It requires a large dataset (over 60,000 CTs and 2742 pairs of depth and CT images) to train, which is not easy to collect. Providing more information on the architecture of the model and the training process could increase reproducibility.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
The method of using an AutoDecoder to learn the WED latent space is well described. However, providing more details on the specific architecture of the AutoDecoder, such as the number of layers, activation functions, and optimizer used, could be beneficial for the reader. Additionally, including more information on the training process, such as the number of epochs, could help readers better understand the proposed method.
While the proposed method demonstrates promising results on a testing set, further improvements in model architecture and testing on larger datasets are necessary to establish the effectiveness and robustness of the proposed method. This could include exploring alternative deep learning architectures, as well as testing on a larger and more diverse set of patients to evaluate the generalizability of the method.
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
5
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
The paper has some weaknesses, such as the lack of comparison with other deep learning models, the limited dataset, and the limited generalizability of the proposed method. It also has strengths, such as the innovative approach of using a semi-supervised autodecoder to learn the WED latent space and the potential clinical applications of the proposed method. Overall, the strengths of the paper are sufficient to warrant acceptance with some revisions and improvements.
- Reviewer confidence
Very confident
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
N/A
- [Post rebuttal] Please justify your decision
N/A
Review #3
- Please describe the contribution of the paper
The authors propose a deep learning-based workflow to eliminate the need for scout scans in CT lung cancer screening. In this work, depth images captured by a 3D camera are used as input to predict the Water Equivalent Diameter and, further, to enable automated CT screening.
- Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.
• This work proposes an interesting solution that uses depth images to improve CT acquisition. This setup will likely inspire other research topics in CT acquisition.
- Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.
• The clinical significance of this work is very limited. The scout scan is a widely accepted and valuable technique for lung cancer screening. Even though the scout scan may take several minutes to localize the patient and causes some radiation exposure (which is actually a very small amount of dose compared to a diagnostic CT), its most valuable role is to enable acquisition of CT images with consistent image quality, which helps avoid missed lung cancer detections. While the authors propose an interesting solution to automate CT image acquisition, it is doubtful that the reported approach achieves the same performance as a scout scan. Cancer screening may therefore be affected, which would have a huge clinical consequence.
• The method description is not clear. Several different networks are included in the workflow, but the paper does not clearly present them, for example how the generative model is used in this workflow and how it connects with the other steps. The methods for the “starting point” and “isocenter calculation” should also be presented in the Method section instead of the Results section.
- Please rate the clarity and organization of this paper
Satisfactory
- Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance
• This work is difficult to reproduce as many key details are not included. For example, the paper does not describe how to set up the 3D camera and how to acquire the depth image. Additionally, the chosen networks are not well described.
• The dataset and code are not available.
- Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html
Other than the points mentioned under “Weaknesses”, here are some other comments:
• The system setup is not described in this work. The authors need to explain how the depth image is acquired, and whether depth images acquired from different locations impact the performance.
• The dataset needs to be clarified. Specifically, in the “Isocenter” part, how does this paper obtain the technicians’ estimates?
• Several networks are included in this workflow. The paper needs to provide an overview (perhaps a figure) that shows the chosen networks, their corresponding inputs and outputs, and how the networks connect.
- Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making
3
- Please justify your recommendation. What were the major factors that led you to your overall score for this paper?
• The clinical significance of this work is very limited.
- Reviewer confidence
Confident but not absolutely certain
- [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed
4
- [Post rebuttal] Please justify your decision
For the response regarding the low clinical significance, the reviewer did not question the significance of low-dose CT screening. The reviewer fully agrees that higher throughput will be highly useful. Further demonstration of the time-saving and clinical validity of avoiding the scout scan is still needed. (This is the main concern) Regarding other major comments, the reviewer is satisfied with the answers provided. Therefore, a “weak reject” is given this time.
Primary Meta-Review
- Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.
This paper proposes a novel method for automating the CT lung cancer screening workflow using 3D camera images, which eliminates the need for time-consuming scout scans. The approach is clearly novel in terms of clinical application. Strengths include the very large number of CT scans (60,000) used for training the model and the promising results shown on the testing set. However, the method lacks the detail needed for sufficient reproduction, and there is no comparison to other methods to understand the accuracy gains. Also, given such a large dataset, the rationale for using a very small testing set should be clarified. In addition, one of the reviewers raised a concern regarding the clinical utility of the approach, as scout scans are considered essential to avoid missing lesion detection. The authors should clarify the clinical utility of their approach.
Author Feedback
We thank the reviewers for their valuable feedback and for recognizing the novelty and potential impact of this work. Below we address the concerns raised by the reviewers about reproducibility and clinical significance.
[R1, R2, R3: Details on model architecture] Our WED model, described in Sec 2, consists of two networks: an encoder network that maps the depth image to a latent vector, and a generative model that maps the latent vector to the WED profiles. The encoder network is a DenseNet with 3 dense blocks of 4 layers each, using spectral normalization and ReLU, and the generative model is an AutoDecoder with 8 layers using layer normalization and ReLU activations. We additionally trained a DenseUNet (4 encoding blocks and 4 decoding blocks with 4 layers each, using batch normalization and ReLU) for estimating the start position, and a DenseNet (3 blocks of 4 layers each, using batch normalization and ReLU) for estimating the isocenter. We will add these details in the revised manuscript.
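[Editor's note] To make the described architecture concrete, below is a minimal, hypothetical PyTorch sketch of the WED model: a DenseNet-style depth encoder (3 dense blocks of 4 layers, spectral normalization, ReLU) producing a latent code, and an 8-layer AutoDecoder MLP with layer normalization and ReLU producing the AP/lateral WED profiles. The latent dimension, growth rate, channel counts, and profile length are illustrative assumptions, not values reported in the paper or rebuttal.

import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm


class DenseBlock(nn.Module):
    # A dense block with 4 conv layers; each layer sees all previous feature maps.
    def __init__(self, in_ch, growth=16, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                spectral_norm(nn.Conv2d(ch, growth, 3, padding=1)),
                nn.ReLU(inplace=True),
            ))
            ch += growth
        self.out_ch = ch

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)


class DepthEncoder(nn.Module):
    # DenseNet-style encoder: 3 dense blocks of 4 layers, spectral norm + ReLU.
    def __init__(self, latent_dim=64):
        super().__init__()
        blocks, ch = [], 1
        for _ in range(3):
            block = DenseBlock(ch)
            blocks += [block, nn.AvgPool2d(2)]
            ch = block.out_ch
        self.features = nn.Sequential(*blocks)
        self.head = nn.Linear(ch, latent_dim)

    def forward(self, depth):                      # depth: (B, 1, H, W)
        f = self.features(depth).mean(dim=(2, 3))  # global average pooling
        return self.head(f)                        # latent code z


class AutoDecoder(nn.Module):
    # 8-layer MLP with layer normalization and ReLU, mapping z to WED profiles.
    def __init__(self, latent_dim=64, hidden=256, profile_len=512):
        super().__init__()
        layers, ch = [], latent_dim
        for _ in range(8):
            layers += [nn.Linear(ch, hidden), nn.LayerNorm(hidden), nn.ReLU(inplace=True)]
            ch = hidden
        self.mlp = nn.Sequential(*layers)
        self.out = nn.Linear(hidden, 2 * profile_len)  # AP and lateral WED profiles

    def forward(self, z):
        return self.out(self.mlp(z)).view(z.shape[0], 2, -1)


# Example: predict AP/lateral WED profiles from a single depth image.
encoder, decoder = DepthEncoder(), AutoDecoder()
wed = decoder(encoder(torch.randn(1, 1, 128, 64)))  # shape (1, 2, profile_len)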
[R2: Limited size of the testing dataset] While we use 62K CT scans, we only have 2.8K paired CT scans and corresponding 3D camera images as they are difficult to acquire. Furthermore, to ensure no patient overlap between training and testing sets, we use all the data (110 patients) collected from a separate site in Europe as the testing set.
[R1, R3: Clarity on 3D camera setup] In our experiments, we used a ceiling-mounted Kinect 2 camera like existing 3D camera-based positioning solutions (e.g., GE, Siemens, United) as described in [12]. We will include this information and the reference in the final manuscript.
[R1’s concerns about the clinical viability of our real-time refinement approach] We understand this concern and will update the manuscript with a clearer description. We would like to clarify that the refinement does not affect the table movement. The table still progresses smoothly inside the gantry, as it does in the current workflow. The real-time refinement module updates the forecasted WED profiles based on the CT signal acquired up until that time. We understand that while the updated WED profiles are computed and applied, the table may have moved slightly further, and thus we cannot apply the updates too frequently. To this end, we evaluated our method with updates applied at 20- and 50-mm intervals and show that the performance meets the IEC requirements despite these limitations. We would like to additionally report that we stop the scan if the deviation is too large, and we observed that this only occurred for 2 patients in the testing set.
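[Editor's note] As an illustration of the refinement idea only (not the authors' implementation), the sketch below periodically re-fits the latent code against the WED values measured from the slices already acquired and aborts the scan if the forecast deviates too much. The update interval, optimizer settings, and abort threshold are assumptions made for the example; `decoder` is assumed to map a latent code to a (2, L) AP/lateral WED profile as in the sketch above.

import torch


def refine_during_scan(decoder, z_init, observed_wed, update_interval_mm=20,
                       slice_spacing_mm=1.0, abort_rel_deviation=0.10, steps=20):
    # Yield an updated WED forecast each time `update_interval_mm` of new data arrives.
    #   decoder:      maps a latent code z to the full (2, L) AP/lateral WED profile
    #   observed_wed: (2, L) tensor of per-slice WED measured from acquired CT data,
    #                 filled in progressively as the table advances
    z = z_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=1e-2)
    slices_per_update = int(update_interval_mm / slice_spacing_mm)
    total = observed_wed.shape[1]

    for end in range(slices_per_update, total + 1, slices_per_update):
        target = observed_wed[:, :end]               # slices acquired so far
        for _ in range(steps):                       # re-fit z to the observed data
            opt.zero_grad()
            loss = torch.nn.functional.l1_loss(decoder(z)[:, :end], target)
            loss.backward()
            opt.step()
        forecast = decoder(z).detach()
        rel_dev = ((forecast[:, :end] - target).abs() / target.clamp_min(1e-6)).mean()
        if rel_dev > abort_rel_deviation:            # large mismatch: stop the scan
            raise RuntimeError("WED forecast deviates too much; abort and rescan")
        yield forecast                               # updated prediction for the
                                                     # remaining, unscanned range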
[R3’s concern about low clinical significance] We believe the clinical significance of this work lies in enabling higher throughput by skipping the topogram for the majority of cases. The need for high throughput and increased accessibility for screening has also been emphasized by the CDC’s 2017 study [Geographic Availability of Low-Dose Computed Tomography for Lung Cancer Screening in the United States].
[R3’s concern on clinical validity] We understand that existing tube current modulation (TCM) modules that use the topogram attempt to maintain constant image noise. However, we would like to add that even when using the predicted WED profiles (AP, lateral), the TCM module would attempt the same. Our results meeting the IEC standard requirements further suggest the clinical validity of our approach. However, we do recognize that our approach may not work on patients with metal implants or resected organs. We will update the manuscript to list these limitations.
Post-rebuttal Meta-Reviews
Meta-review # 1 (Primary)
- Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.
The authors responded to the major concerns of the reviewers. Assuming the authors will add the details of the method and explain the rationale for using such a small testing set despite having a large dataset, the paper is recommended for acceptance.
Meta-review #2
- Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.
The paper addresses an interesting question and is evaluated on a large-scale dataset (60,000 CT scans). However, the method may not be clearly described, and the experimental section could be further strengthened.
Meta-review #3
- Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.
The authors have addressed most of the comments raised by the reviewers and the original AC about the clinical utility of the method, as well as clarifying some design decisions in their experimental setup. I encourage the authors to fully revise their paper to take those comments into account.