Speaker
Description
Characterisation of the internal three-dimensional (3D) structure of complex porous materials has been revolutionised by deep learning-based image processing and segmentation, which promise second-scale scan times with hour-scale quality and beyond-human multi-label segmentation accuracy in a fraction of the time. However, these claims currently hold only for single-sample, single-domain cases using 2D networks on 3D data, or for small 3D subdomains (<$10^{8}$ voxels) using 3D networks. These limitations stem from the domain mismatch between trained networks and inference inputs, from the dimensional blindness of 2D networks applied to 3D data, which causes z-axis misalignment (the coin-stack (CS) effect), and from the incompatibility between memory-inefficient 3D networks and large-scale 3D data. This paper resolves these interconnected issues, which have prevented the true application of deep learning to 3D volume data ($10^{11}$ voxels, typical of synchrotron and nano/micro-CT imaging). Herein, we introduce an unpaired, semantically consistent pseudo-3D approach to domain transfer capable of inference on domains approaching the tera-scale. Several important domain transfer applications are exhibited and validated using pixel metrics and physical parameters, including enhancement of the time resolution of static and dynamic scans of geological rocks from hour-scale to minute- and second-scale while maintaining hour-scale image quality, accurate segmentation of out-of-domain nano/micro-CT images using pretrained segmentation models of lithium-ion batteries and hydrogen fuel cells, and efficient large-scale 3D inference ($10^{11}$ voxels) on a single GPU.
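To illustrate the memory argument behind slice-wise pseudo-3D inference, the sketch below applies a 2D network to a 3D volume along each of the three orthogonal axes and averages the results, keeping per-step GPU memory at the scale of a single slice rather than the full volume. This is a minimal illustration only, assuming PyTorch and a hypothetical pretrained 2D model `net2d`; the three-axis averaging shown here is one common way to suppress the coin-stack artefact of single-axis 2D inference, not necessarily the exact method presented in this work.

```python
import torch
import torch.nn as nn


def pseudo3d_inference(volume: torch.Tensor, net2d: nn.Module,
                       device: str = "cuda") -> torch.Tensor:
    """Apply a 2D network slice-wise along all three axes of a (D, H, W) volume
    and average the three passes. Only one slice is resident on the GPU at a time."""
    out = torch.zeros_like(volume)
    for axis in range(3):                       # slice along z, then y, then x
        moved = volume.movedim(axis, 0)         # bring the slicing axis to the front
        pred = torch.empty_like(moved)
        with torch.no_grad():
            for i in range(moved.shape[0]):
                s = moved[i].unsqueeze(0).unsqueeze(0).to(device)  # (1, 1, H, W)
                pred[i] = net2d(s).squeeze(0).squeeze(0).cpu()
        out += pred.movedim(0, axis)            # restore the original axis order
    return out / 3.0                            # average the three orthogonal passes


if __name__ == "__main__":
    # Placeholder identity "network" and a small random volume, for demonstration only.
    dummy_net = nn.Identity()
    vol = torch.rand(64, 64, 64)
    result = pseudo3d_inference(vol, dummy_net, device="cpu")
    print(result.shape)  # torch.Size([64, 64, 64])
```

Because the loop streams one slice at a time to the GPU and accumulates results on the CPU, the same pattern scales to volumes far larger than GPU memory, which is the essential idea behind single-GPU inference on $10^{11}$-voxel domains.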
Country | Australia |
---|---|
Student Awards | I would like to submit this presentation into both awards |