
Authors

Yu Liu, Gesine Müller, Nassir Navab, Carsten Marr, Jan Huisken, Tingying Peng

Abstract

Light-sheet fluorescence microscopy (LSFM), a planar illumination technique that enables high-resolution imaging of samples with minimal photo-damage, experiences “defocused” image quality caused by light scattering when photons propagate through thick tissues. To circumvent this issue, dual-view imaging is particularly helpful: it allows different sections of the specimen to be scanned under near-ideal conditions by viewing the sample from opposing orientations. Recent image fusion approaches can then be applied to determine in-focus pixels by comparing the image qualities of the two views locally, but they yield spatially inconsistent focus measures due to their limited field-of-view. Here, we propose BigFUSE, a global context-aware image fuser that stabilizes image fusion in LSFM by considering the global impact of photon propagation in the specimen while determining focus-defocus based on local image qualities. Inspired by the distinctive image formation prior in dual-view LSFM, image fusion is framed as estimating a focus-defocus boundary using Bayes’ Theorem, where (i) the adverse effect of light scattering on focus measures is included in the Likelihood; and (ii) spatial consistency of focus-defocus is imposed in the Prior. The expectation-maximization (EM) algorithm, aided by a reliable initialization, is then adopted to estimate the focus-defocus boundary. Competitive experimental results show that BigFUSE is the first dual-view LSFM fuser able to exclude structured artifacts when fusing information, highlighting its ability for automatic image fusion.
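For intuition only, the sketch below illustrates the general idea summarized in the abstract: estimate a focus-defocus boundary from local focus measures of two opposing views, enforce some spatial consistency, and then composite the fused slice by selecting one view on each side of the boundary. It is not the authors’ implementation (BigFUSE uses a Bayesian likelihood/prior optimized with EM); the Laplacian-energy focus measure, the cumulative-advantage heuristic, the Gaussian smoothing standing in for the spatial-consistency prior, and the function name fuse_dual_view are all illustrative assumptions.

```python
# Minimal, illustrative sketch of boundary-based dual-view fusion for one 2D slice.
# Assumptions: rows = light-propagation axis; Laplacian energy as focus measure;
# Gaussian smoothing as a crude stand-in for the spatial-consistency prior / EM step.
import numpy as np
from scipy.ndimage import laplace, uniform_filter, gaussian_filter1d

def fuse_dual_view(view_a, view_b, win=31, smooth=25.0):
    """view_a, view_b: 2D arrays of the same shape, illuminated from opposite sides."""
    # Local focus measure per pixel: windowed Laplacian energy.
    fm_a = uniform_filter(laplace(view_a.astype(float)) ** 2, size=win)
    fm_b = uniform_filter(laplace(view_b.astype(float)) ** 2, size=win)

    # Column-wise cumulative advantage of view A; its peak gives a crude per-column
    # estimate of where view A stops being sharper than view B.
    advantage = np.cumsum(fm_a - fm_b, axis=0)
    boundary = advantage.argmax(axis=0).astype(float)

    # Impose spatial consistency on the boundary (stand-in for the Prior / EM refinement).
    boundary = gaussian_filter1d(boundary, sigma=smooth)

    # Composite: take view A above the boundary, view B below it.
    rows = np.arange(view_a.shape[0])[:, None]
    fused = np.where(rows <= boundary[None, :], view_a, view_b)
    return fused, boundary
```

Selecting pixels from exactly one view on each side of a smooth boundary, rather than blending both views everywhere, is what avoids ghosting in this toy setting; BigFUSE’s actual likelihood additionally accounts for light scattering along the propagation path.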

Link to paper

DOI: https://doi.org/10.1007/978-3-031-43993-3_62

SharedIt: https://rdcu.be/dnwN7

Link to the code repository

N/A

Link to the dataset(s)

N/A


Reviews

Review #1

  • Please describe the contribution of the paper

    This paper introduces an image fusion method for dual-view LSFM images that finds the focus-defocus boundary. The method addresses a fundamental issue in LSFM imaging, and I would expect it to have great impact in this field.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The uniqueness of the proposed image fusion method is that it does not attempt to fuse the information from multiple images at every pixel, which would inevitably mix information from different images to some degree. Given the special properties of LSFM imaging, for each pixel one of the multiple views has the best quality. The problem then becomes selecting the “good” version at each pixel instead of fusing multiple versions. Again, the method takes the imaging technique into account and further turns the selection problem into a separating-boundary-finding problem: one view is always better on one side of the boundary, while the other view is always better on the other side. To solve the boundary-finding problem, the authors formulate it as a Bayesian problem and develop a good optimization solution.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    The authors did not mention code release in the main text. Without releasing the code, the impact of this paper could be much reduced.

  • Please rate the clarity and organization of this paper

    Excellent

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    I cannot judge the code due to lack of information.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html

    I would highly recommend a public code release for reproducibility and for making a larger impact on the community.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    7

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    This paper solves a fundamental problem in the LSFM field. The proposed method is legendary. It requires good knowledge of the imaging system, good knowledge of optics, and expertise in computer vision and statistical methods.

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    8

  • [Post rebuttal] Please justify your decision

    I think the authors address the concerns from all reviewers well! Nice work!



Review #2

  • Please describe the contribution of the paper

    This paper proposes a method for efficiently fusing dual-view light-sheet microscopy images by taking into consideration how image quality depends on the direction of imaging. The authors propose a cost function that includes a prior on image quality and on the smoothness of the focus-defocus boundary arising from the imaging properties, and optimize it with an expectation-maximization framework.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    main strengths include:

    • novel formulation of the problem that can have a great impact on light sheet image processing, especially automated segmentation of small structures
    • ideas communicated pretty clearly
  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    main weaknesses:

    • not tested on enough datasets
    • quantitative measurement is challenging
  • Please rate the clarity and organization of this paper

    Excellent

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    This paper is clearly written and is reproducible.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html

    This paper proposes an interesting way of fusing images from dual view light sheet microscopy. Some of the improvements that could be made are:

    • In Table 1, although BigFUSE’s EMSE value is bolded, it is not the best one. Some discussion of this would be necessary.

    • In Figure 2 it is quite difficult to see differences in the results of the different methods, even in the zoomed-in boxes. The boundary differences are the only easily observable ones. Perhaps just one ground-truth image with a box showing where the region comes from, followed by zoomed-in versions of the results, would be more helpful.

    • A discussion of the differences in results would also be helpful in the caption of Figure 2.

    • Same with Figure 3 - showing just the zoomed-in boxes would be more helpful for the reader.

    • It would be good to get a bit more information about Figure 4 - how many images was this calculated on? Is this an average?

    • In order to truly verify the impact of this method, I feel an impact on an application (like segmentation or counting of objects) needs to be shown.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    4

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    I think this paper can have great impact on the work in MICCAI. However, I feel that the experimental section needs to be stronger: either validation on many more datasets with similar metrics, or an impact shown in the form of segmentation results being different. I think the second is actually quite easy to show given great tools like Cellpose that are now available to benchmark with, so the paper would feel more complete with that type of information included.

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    7

  • [Post rebuttal] Please justify your decision

    The authors have addressed the concerns that I had.



Review #3

  • Please describe the contribution of the paper

    The paper proposes an automatic image fusion method to combine information from dual-view light-sheet fluorescence microscopy (LSFM) images. It is posed as estimating a focus-defocus boundary using Bayes’ Theorem. The global impact of photon propagation in the specimen is considered while focus-defocus is determined based on local image qualities.

  • Please list the main strengths of the paper; you should write about a novel formulation, an original way to use data, demonstration of clinical feasibility, a novel application, a particularly strong evaluation, or anything else that is a strong aspect of this work. Please provide details, for instance, if a method is novel, explain what aspect is novel and why this is interesting.

    The method formulation is novel in its application to dual-view LSFM image fusion, though Bayesian fusion has been used for image fusion tasks in the literature (e.g., Zhao et al. 2020, “Bayesian fusion for infrared and visible images”). The proposed method maximizes the image-clarity likelihood given the two image views and their image formation prior of opposing illumination directions. The results don’t show ghost artifacts.

  • Please list the main weaknesses of the paper. Please provide details, for instance, if you think a method is not novel, explain why and provide a reference to prior work.

    The methods used for comparison are all from at least six years ago: DWT (2012), NSCT (2009), dSIFT (2015), BF (2017). There are many newer developments in this field. See Azam et al., “A review on multimodal medical image fusion: Compendious analysis of medical modalities, multimodal databases, fusion techniques and quality metrics” (2022) and Liu et al., “Multi-focus image fusion: A survey of the state of the art” (2020).

  • Please rate the clarity and organization of this paper

    Very Good

  • Please comment on the reproducibility of the paper. Note, that authors have filled out a reproducibility checklist upon submission. Please be aware that authors are not required to meet all criteria on the checklist - for instance, providing code and data is a plus, but not a requirement for acceptance

    Some parameters used in preparing the data could be specified. Runtime was not reported.

  • Please provide detailed and constructive comments for the authors. Please also refer to our Reviewer’s guide on what makes a good review: https://conferences.miccai.org/2023/en/REVIEWER-GUIDELINES.html

    The authors propose a dual-view LSFM image fuser in a Bayesian framework. A synthetic-blur image and a real-blur image were used. It would be clearer if the authors briefly explained what the three fusion quality metrics represent rather than just giving the symbols and references. More images would be desirable for performance quantification, reported in a table of averaged metrics (more suitable than the column charts). It would also be good to specify the parameters used for the Gaussian filter and to provide a runtime comparison.

  • Rate the paper on a scale of 1-8, 8 being the strongest (8-5: accept; 4-1: reject). Spreading the score helps create a distribution for decision-making

    6

  • Please justify your recommendation. What were the major factors that led you to your overall score for this paper?

    Overall, the paper is well-structured and validated.

  • Reviewer confidence

    Very confident

  • [Post rebuttal] After reading the author’s rebuttal, state your overall opinion of the paper if it has been changed

    N/A

  • [Post rebuttal] Please justify your decision

    N/A




Primary Meta-Review

  • Please provide your assessment of this work, taking into account all reviews. Summarize the key strengths and weaknesses of the paper and justify your recommendation. In case you deviate from the reviewers’ recommendations, explain in detail the reasons why. In case of an invitation for rebuttal, clarify which points are important to address in the rebuttal.

    The reviewers were very enthusiastic about this paper, citing its excellent technical framework and validation and its potential impact on light-sheet microscopy. There are some minor criticisms that the authors should address, which would only help improve the usefulness of this method to the field: 1) code should be released, 2) improvements on some downstream task should be demonstrated, and 3) more recent literature should be cited, as R3 has noted. Upon addressing these, this paper would be an excellent addition to MICCAI.




Author Feedback

Dear Area Chair and Reviewers,

We thank the Area Chair and the reviewers for their constructive feedback on our manuscript and for giving us the opportunity to clarify the points raised. Our method, a dual-view light-sheet fluorescence microscopy image fuser with an image formation prior, called BigFUSE, is “a novel formulation of the problem that can have a great impact on light sheet image processing” (R2). All reviewers acknowledge the novelty of our proposed method and find that “the proposed method is legendary. It requires good knowledge of the imaging system, good knowledge of optics, and expertise in computer vision and statistical methods” (R1). They further agree that “This paper solves a fundamental problem in the LSFM field” (R1), and “is clearly written and is reproducible” (R2). Yet they also have concerns about (i) the code release (R1), (ii) additional experiments to demonstrate performance improvements on downstream tasks (R2), and (iii) insufficient performance quantification, including citing more recent literature for comparison (R3).

(i) As stated in the conclusion section of our manuscript, the BigFUSE code will be made accessible for academic usage. We will make the BigFUSE code open-source on GitHub, and are very happy to see BigFUSE “make larger impact for the community” (R1).

(ii) To address the concerns about improvements on downstream tasks raised by R2, we include segmentation results for the different baseline methods obtained with Cellpose in the updated manuscript: “additionally, we demonstrate the impact of BigFUSE on a specific downstream task, i.e., segmentation by Cellpose. Only the fusion result provided by BigFUSE allows reasonable segmentation, given that ghosting artifacts are dramatically excluded”.

(iii) With respect to the concerns about performance quantification, we cite several review papers, as suggested by R3, including “Azam et al., Computers in Biology and Medicine (2022)”, “Ma et al., Neural Computing and Applications (2020)”, and “Liu et al., Information Fusion (2020)”. Yet most of the methods in these review papers address general natural/medical image fusion, which does not completely fit our light-sheet dual-view fusion problem; e.g., none of these methods can address the ghost problem. Moreover, we have checked the detailed comments and updated the paper accordingly: (iii.a) a runtime experiment is included in the updated manuscript (R3): “the whole BigFUSE pipeline takes roughly nine minutes to process a zebrafish embryo (282×2048×2048 pixels for each view), using a T4 GPU with 25 GB system RAM and 15 GB GPU RAM.” (iii.b) To emphasize “the differences among the results” (R2), Figs. 2 and 3 are reorganized with “just one ground truth image with the box showing where the region comes from and zoomed in versions of results” (R2). In addition, the caption of Fig. 2 is rewritten to “discuss differences in results” (R2). (iii.c) We include a “detailed explanation of the fusion quality metrics” (R3) in the updated manuscript, and also the “calculation of fusion quality metrics” (R2): “metrics in Fig. 4 are calculated on 282 slices of 2048×2048 images”.

Furthermore, we will follow the suggestions and evaluate BigFUSE against more SOTA methods in a journal extension of the manuscript. Finally, we thank the reviewers again for their review and every invaluable suggestion given. We hope that we have clarified the reviewers’ concerns and can provide a revised manuscript that “can have great impact on the work in MICCAI” (R2) and “solves a fundamental issue in LSFM imaging” (R1).
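To make the downstream check described in the rebuttal concrete, the snippet below is a hedged sketch (not taken from the paper) of how fused outputs could be compared via Cellpose segmentation. It assumes the Cellpose v2 Python API (models.Cellpose / model.eval with channels=[0, 0] for grayscale input); the file names are hypothetical placeholders, and counting labeled objects is only one simple way to compare results.

```python
# Hedged sketch: compare downstream Cellpose segmentation on two fusion results.
# Assumes the Cellpose v2 Python API; file names are hypothetical placeholders.
import tifffile
from cellpose import models

fused = {
    "BigFUSE":  tifffile.imread("fused_bigfuse.tif"),   # hypothetical path
    "baseline": tifffile.imread("fused_baseline.tif"),  # hypothetical path
}

model = models.Cellpose(gpu=True, model_type="cyto")
for name, img in fused.items():
    # channels=[0, 0]: grayscale input, no separate nuclear channel.
    masks, flows, styles, diams = model.eval(img, diameter=None, channels=[0, 0])
    print(f"{name}: {int(masks.max())} objects segmented")  # labels run 1..N
```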




Post-rebuttal Meta-Reviews

Meta-review # 1 (Primary)

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    The authors have addressed all of the reviewers’ concerns, elevating their ratings to a unanimous accept. I recommend a student award and an oral presentation for this paper.



Meta-review #2

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    Strengths: the problem formulation is reasonably new. Weaknesses: the code is yet to be made publicly available, and convincing demonstration on downstream applications is yet to be provided.
    How the rebuttal informed my decision: the authors’ rebuttal states that the code will be released and that they will demonstrate downstream tasks such as segmentation by Cellpose.



Meta-review #3

  • Please provide your assessment of the paper taking all information into account, including rebuttal. Highlight the key strengths and weaknesses of the paper, clarify how you reconciled contrasting review comments and scores, indicate if concerns were successfully addressed in the rebuttal, and provide a clear justification of your decision. If you disagree with some of the (meta)reviewer statements, you can indicate so in your meta-review. Please make sure that the authors, program chairs, and the public can understand the reason for your decision.

    This manuscript presents a novel image fusion method for light-sheet fluorescence microscopy images. It models image fusion from a Bayesian perspective (claimed to be the first such effort) and outperforms related approaches in the experiments. The rebuttal has addressed most of the reviewers’ concerns, such as publication of the source code, performance improvements on downstream tasks (e.g., image segmentation), and presentation improvements (e.g., Fig. 2 and Fig. 3). All the reviewers suggested acceptance of the manuscript.


