Principle of MASI and computational synchronization
Figure 1 illustrates the operating principle and implementation of MASI. In Fig. 1a, we demonstrate how MASI surpasses the diffraction limit of a single sensor by coherently fusing wavefields in real space, without requiring reference waves or overlapping measurement regions between receivers. The process begins with capturing raw intensity patterns using an array of coded sensors positioned at different diffraction planes. Each sensor incorporates a pre-calibrated coded surface that enables robust recovery of complex wavefield information via ptychographic reconstruction. After recovering the wavefields, we computationally pad them and propagate them to the object plane in real space for alignment and synthesis.
a Resolution enhancement with MASI. Lensless diffraction patterns of a point source are captured by nine coded sensors (first column). These images are processed to recover the complex wavefields (second column), which are then padded and propagated to the object plane (third column). Through computational phase synchronization (fourth column), MASI synchronizes wavefields from different sensors by optimizing their relative phase offsets to maximize energy in the reconstructed object, without requiring any overlapping measurement regions between individual sensors. In the rightmost panel, the nine color blocks in the bottom right inset represent the recovered phase offsets of individual sensors, where the phase values are coded with color hues. b Field of view expansion with MASI. As the padded wavefields are propagated from the coded surface plane to the object plane, diffraction naturally expands the field of view beyond individual sensor dimensions, enabling reconstruction despite physical gaps between sensors. c MASI prototype with a compact array of coded sensors. The insets show the coded image sensor and its integration with a customized ribbon flexible cable. d Reflection-mode configuration, where a laser beam illuminates the object surface at ~45 degrees.
An important step enabling MASI’s performance is to properly synchronize the wavefields from individual sensors that operate independently without overlapping measurement regions. As shown in the fourth column panels of Fig. 1a, we address this challenge by implementing computational phase synchronization between individual sensors. We designate one of the sensors as a reference and computationally determine the phase offsets for all remaining sensors through an iterative optimization process (Methods). The color blocks in the fourth column panels of Fig. 1a represent these recovered phase offsets, with different color hues indicating different phase values. By maximizing the energy concentration in the reconstructed image, this approach ensures constructive interference between all sensor contributions despite their physical separation, eliminating the need for the overlapping measurements or reference waves that constrain conventional techniques. With proper synchronization, the real-space coherent fusion in MASI significantly improves resolution over what a single sensor alone can achieve, as shown in the rightmost panel of Fig. 1a (also refer to Supplementary Figs. S1-S2 and Note 1). In contrast with conventional approaches that perform aperture synthesis in reciprocal space, MASI operates entirely through real-space alignment, synchronization, and coherent fusion, effectively transforming a distributed array of independent small-aperture sensors into a single large virtual aperture.
An enabling factor for MASI’s multiscale strategy is the ability to accurately recover wavefield information using individual sensors. As demonstrated in Supplementary Fig. S3, conventional phase retrieval methods like Fourier ptychography suffer from a non-uniform phase transfer function36 that drops to near-zero values for low spatial frequencies, making them blind to slowly varying phase components such as linear phase ramps and step transitions18. In contrast, an individual MASI coded sensor successfully recovers both the step phase transition and the linear phase gradient with only a constant offset from the ground truth. This robust performance stems from the coded surface modulation, which converts phase variations—including low-frequency aberrations—into detectable intensity variations21,43. For example, a linear phase ramp is converted into a spatial shift of the modulated pattern, while other slowly varying phase variations manifest as distortions in the modulated pattern. Supplementary Fig. S4 illustrates how Fourier ptychography fails when attempting synthetic aperture imaging of complex objects with multiple phase wraps. This demonstrates that, without proper phase recovery at the individual sensor level, conventional methods cannot achieve successful synthetic aperture imaging.
Figure 1b demonstrates MASI’s field expansion capability. As the recovered wavefields from individual sensors are digitally padded and propagated back to the object plane, diffraction naturally expands each sensor’s field of view beyond its physical dimensions, effectively eliminating gaps in the final reconstruction, as shown in the rightmost panel of Fig. 1b (also refer to Supplementary Figs. S5-S6 and Note 1). Figure 1c shows the MASI sensor array, which consists of nine coded sensors arranged in a grid configuration. This multiscale architecture—breaking the imaging challenge into parallel, independent sub-problems—enables each sensor to operate without overlapping with the others. During operation, piezoelectric stages introduce sub-pixel shifts (~1–2 μm) to individual sensors for ptychogram acquisition37, enabling complex wavefield recovery and pixel super-resolution reconstruction38,39 from intensity-only diffraction measurements. These shifts are orders of magnitude smaller than the millimeter-scale gaps between adjacent sensors, ensuring completely independent operation that could scale to long-baseline optical imaging, similar to distributed telescope arrays40 in radio astronomy. In MASI, sensors can be positioned on surfaces at different depths and spatial locations without requiring precise alignment. This design tolerance dramatically simplifies system implementation while maintaining the ability to synthesize a larger virtual aperture. The physical prototype in Fig. 1d demonstrates the system’s compact form factor and practical deployment in a reflection configuration, where the sensor array is placed at a 45-degree tilted plane.
The imaging model of MASI can be formulated as follows. We first denote \(O(x,y)\) as the object exit wavefield in real space. If the object is three-dimensional with a non-planar shape, \(O(x,y)\) refers to its 2D diffracted field just above the 3D object. With this 2D exit wavefield, one can digitally propagate it back to any axial plane and locate the best in-focus position to extract the 3D shape. With \(O(x,y)\), the wavefield arriving at the sth coded sensor at a distance \(h_{s}\) can be written as:
$$W_{s}\left(x,y\right)=O(x,y)*\mathrm{psf}_{\mathrm{free}}(h_{s})$$
(1)
where \(*\) denotes convolution, and \(\mathrm{psf}_{\mathrm{free}}(h_{s})\) represents the free-space propagation kernel for a distance \(h_{s}\). Because each sensor is placed at a laterally shifted position (\(x_{s}\), \(y_{s}\)) and has a finite size, we extract the portion of \(W_{s}(x,y)\) that falls onto the sth coded sensor with \(m\) rows and \(n\) columns:
$$W_{s}^{\mathrm{crop}}\left(1:m,\,1:n\right)=W_{s}\!\left(x_{s}-\tfrac{m}{2}:x_{s}+\tfrac{m}{2},\;y_{s}-\tfrac{n}{2}:y_{s}+\tfrac{n}{2}\right)$$
(2)
The intensity measurement acquired by the sth coded sensor can be written as:
$$I_{s,j}\left(x-x_{j},\,y-y_{j}\right)={\left|\left\{W_{s}^{\mathrm{crop}}(x,y)\cdot CS_{s}(x-x_{j},\,y-y_{j})\right\}*\mathrm{psf}_{\mathrm{free}}(d)\right|}^{2}$$
(3)
where \(CS_{s}(x,y)\) represents the coded surface of the sth coded image sensor, and ‘\(*\,\mathrm{psf}_{\mathrm{free}}(d)\)’ models the free-space propagation over a distance d between the coded surface and the sensor’s pixel array. Here, the subscript j in \(I_{s,j}(x,y)\) represents the jth measurement obtained by introducing a small sub-pixel shift (\(x_{j}\), \(y_{j}\)) of the coded sensor using an integrated piezo actuator (Methods). Physically, this process encompasses two main steps. The wavefield \(W_{s}^{\mathrm{crop}}\) is first modulated by the known coding pattern \(CS_{s}\) upon arriving at the coded surface plane, and the resulting wavefield then propagates a short distance d before reaching the detector.
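As a concrete illustration of Eqs. (1)-(3), the following Python sketch simulates one coded-sensor measurement under the stated forward model. It is a minimal numerical example rather than the authors’ implementation: the angular-spectrum propagator stands in for \(\mathrm{psf}_{\mathrm{free}}\), the coded surface is an arbitrary user-supplied array, and the piezo shift is approximated by an integer pixel roll for simplicity.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Free-space propagation of a complex field over distance z (angular spectrum)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)                      # spatial frequencies, cycles per meter
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)                # evanescent components discarded
    return np.fft.ifft2(np.fft.fft2(field) * H)

def simulate_measurement(obj, coded_surface, wavelength, dx, h_s, d, crop, shift):
    """One intensity frame I_{s,j} of Eq. (3) for a sensor at distance h_s.
    crop = (row0, col0, m, n) selects the sensor footprint; shift = (dy, dx_pix)
    is the piezo shift, treated here as an integer pixel shift."""
    W_s = angular_spectrum(obj, wavelength, dx, h_s)   # Eq. (1): field at the sensor plane
    row0, col0, m, n = crop
    W_crop = W_s[row0:row0 + m, col0:col0 + n]         # Eq. (2): portion falling on the sensor
    CS = np.roll(coded_surface, shift, axis=(0, 1))    # laterally shifted coded surface
    at_pixels = angular_spectrum(W_crop * CS, wavelength, dx, d)
    return np.abs(at_pixels) ** 2                      # Eq. (3): intensity-only detection
```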
With a set of acquired intensity diffraction patterns \(\{{I}_{s,j}\}\), the goal of MASI is to recover the high-resolution object wavefield \(O(x,y)\) that surpasses the resolution achievable by a single detector. Reconstruction occurs in two main steps. First, the cropped wavefield \({W}_{s}^{{crop}}(x,y)\) is recovered from measurements \(\{{I}_{s,j}\}\) using the ptychographic phase-retrieval algorithm41,42. The recovered wavefield is then padded to its original un-cropped size, forming \({W}_{s}^{{pad}}\left(x,y\right)\). Next, each padded wavefield is numerically propagated back to the object plane in real space. The accurate positioning of each coded sensor (\({x}_{s}\), \({y}_{s}\), \({h}_{s}\)) is critical for proper alignment and is determined through a one-time calibration experiment (Methods and Supplementary Note 2). Using these calibrated parameters, individual object-plane wavefields are aligned and coherently fused into a single high-resolution reconstruction through computational wavefield synchronization:
$$O_{\mathrm{recover}}(x,y)=\sum_{s}\left[\left(e^{i\varphi_{s}}\cdot W_{s}^{\mathrm{pad}}\left(x,y\right)\right)*\mathrm{psf}_{\mathrm{free}}(-h_{s})\right]$$
(4)
where \(\varphi_{s}\) is the phase offset for the sth coded sensor. Supplementary Note 3 details our iterative computational phase compensation method, which adjusts the unknown \(\{\varphi_{s}\}\) to maximize the integrated energy of the fused reconstruction \(O_{\mathrm{recover}}\). Supplementary Fig. S7 demonstrates the principle underlying this approach, showing that for objects with both brightfield and darkfield contrast, the total synthesized intensity consistently reaches its maximum when all sensors have their correct phase offsets. This behavior enables our coordinate descent algorithm in Supplementary Fig. S8, which sequentially optimizes each sensor’s phase while maintaining computational efficiency. The effectiveness of this approach is validated in a simulation study in Supplementary Fig. S9, which shows that our method successfully recovers high-fidelity reconstructions from severely distorted initial states, achieving near-zero errors compared to the ground truth objects. The proposed computational phase synchronization approach parallels wavefront shaping techniques in adaptive optics30,31,32,33,34, ensuring constructive interference and maximum energy recovery. Alternative optimization metrics, such as darkfield minimization or contrast maximization, can also be employed depending on the imaging context. The real-space coherent synchronization in Eq. (4) yields a resolution enhancement that is unattainable by any single receiver alone, synthesizing a larger effective aperture. By decoupling the phase retrieval and sensor geometry requirements, MASI accommodates flexible sensor placements with minimal alignment constraints, realizing a practical platform for high-resolution, scalable optical synthetic aperture imaging.
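A minimal sketch of the fusion and synchronization in Eq. (4) is given below, assuming the padded wavefields have already been propagated to the object plane and aligned to a common grid. The discrete phase grid and the simple coordinate-descent loop are an illustrative stand-in for the procedure of Supplementary Note 3; the energy metric follows the description in the text, and the function and variable names are assumptions for this example.

```python
import numpy as np

def synchronize_phases(object_plane_fields, n_candidates=64, n_sweeps=5):
    """Coordinate descent over per-sensor phase offsets phi_s (sensor 0 is the reference).
    object_plane_fields: list of W_s^pad * psf_free(-h_s), aligned to a common grid."""
    candidates = np.linspace(0.0, 2 * np.pi, n_candidates, endpoint=False)
    phi = np.zeros(len(object_plane_fields))
    for _ in range(n_sweeps):                          # repeat sweeps until offsets settle
        for s in range(1, len(object_plane_fields)):   # reference sensor keeps phi = 0
            best_energy, best_phi = -np.inf, phi[s]
            for cand in candidates:
                phi[s] = cand
                fused = sum(np.exp(1j * p) * f
                            for p, f in zip(phi, object_plane_fields))
                energy = np.sum(np.abs(fused) ** 2)    # integrated energy of O_recover
                if energy > best_energy:
                    best_energy, best_phi = energy, cand
            phi[s] = best_phi
    return phi

# Eq. (4): fused high-resolution reconstruction with the optimized offsets
# O_recover = sum(np.exp(1j * p) * f for p, f in zip(phi, object_plane_fields))
```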
Experimental characterization
To evaluate and characterize the performance of MASI, we conducted experimental validation under both transmission and reflection configurations. In Fig. 2, we employed a transmission configuration using point-like emitters as test objects. As shown in Fig. 2a, the MASI prototype was positioned to capture diffraction patterns from a single-mode fiber-coupled laser, with piezo actuators enabling small lateral shifts of the sensor array (~1 µm per step) to ensure measurement diversity for ptychographic reconstruction. The point-source object served two purposes: it provided validation of resolution improvements through analysis of the reconstruction, and it enabled calibration of each sensor’s relative position and distance parameters (\(x_{s}\), \(y_{s}\), \(h_{s}\)). Figure 2b presents zoomed-in views of the reconstructed complex wavefields from individual sensors of MASI. The insets of Fig. 2b also show the full fields of view of the reconstructions (labeled as ‘Full FOV’). These recovered wavefields exhibit distinct fringe patterns corresponding to their respective positions relative to the point object.
a Schematic of using MASI in a transmission configuration. The numbered coded sensors (1–9) are positioned on an integrated piezo stage that introduces controlled sub-pixel shifts. b Complex wavefields recovered from individual sensors of MASI. The color map presents the phase information from -π to π. The wavefields shown in the main panels are zoomed-in views of the area indicated by the white box in the ‘Full FOV’ (full field of view) inset. c The iterative phase compensation process, where phase offsets of individual sensors are digitally tuned to maximize the integrated intensity of the object. The nine color blocks represent the recovered phase offsets of individual sensors. d The recovered point source using Sensor 5 alone. The limited aperture of a single sensor results in a broadened point source reconstruction. e Result of coherent synthesis without proper phase offset compensation. f MASI coherent synthesis with optimized phase offsets obtained from the computational phase compensation process. The synthesized aperture provides substantially improved resolution. In addition to validating resolution gains, this point-source experiment also provides calibration data (\(x_{s}\), \(y_{s}\), \(h_{s}\)) for each sensor, enabling precise alignment in subsequent imaging tasks. Supplementary Movie 1 visualizes the iterative phase compensation process for computational wavefield synchronization.
To coherently synthesize independent wavefields from different sensors, we implemented the computational phase compensation procedure in Fig. 2c, which iteratively optimizes each sensor’s global phase offset to maximize the integrated intensity at the object plane (Supplementary Note 3). The nine color blocks here represent the recovered phase offsets for the nine coded sensors, with different color hues indicating different phase values. In the synchronization process, we also employed field-padded propagation to extend the computational window beyond each sensor’s physical dimensions for robust real-space alignment. Figures 2d–f demonstrate the effectiveness of the MASI implementation. The single-sensor reconstruction in Fig. 2d shows point-spread-function broadening due to the diffraction limit of its aperture. Unsynchronized multi-sensor fusion in Fig. 2e yields limited improvement. In contrast, MASI’s computational synchronization in Fig. 2f substantially improves resolution. The nine color blocks in the insets of Figs. 2e, f show the phase offsets before and after optimization. These results confirm that MASI’s computational synchronization effectively extends imaging capabilities beyond individual sensor limitations. Supplementary Movie 1 further illuminates the iterative phase synchronization process shown in Fig. 2c.
In Fig. 3, we validated MASI in a reflection-mode configuration using a standard resolution test chart positioned at a ~45-degree angle relative to the sensor array. The experimental setup is illustrated in Fig. 3a, where a laser beam illuminates the resolution target and the diffracted wavefield is captured by the MASI sensor array. Figure 3b shows the padded, propagated wavefields from all nine sensors at the object plane, each capturing complementary spatial information of the resolution target. Figure 3c shows the computational phase synchronization process, visualized through different phase offset combinations in the top right insets. One can tune the phase offsets of different sensors to generate additional contrast modes, such as the darkfield image in the top panel of Fig. 3c. Comparing the raw diffraction data (Fig. 3d) with single-sensor recovery (Fig. 3e) and synchronized MASI coherent fusion (Fig. 3f) reveals the dramatic resolution enhancement achieved through computational wavefield synthesis. The color variation in Figs. 3e, f represents depth information ranging from 2.21 to 2.23 cm, resulting from MASI’s tilted configuration relative to the resolution target. To accurately handle the tilted configuration in this experiment, we implemented a tilt propagation method, as demonstrated in Supplementary Fig. S10 (Methods).
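The exact tilt propagation method is described in Supplementary Fig. S10 and Methods; purely for intuition, the sketch below shows one crude stand-in that approximates a tilted object plane by propagating the fused wavefield to a set of depths and assembling the result column by column according to each column’s nominal depth. The function name, depth sampling, and column-to-depth mapping are illustrative assumptions, and the angular_spectrum helper from the earlier sketch is reused.

```python
import numpy as np

def tilted_plane_slices(W_fused, wavelength, dx, z_of_column, n_planes=32):
    """Approximate reconstruction on a tilted object plane: propagate to a set of
    depths and keep, for every column, the plane closest to that column's depth.
    z_of_column: 1D array (length = number of columns) of nominal depths."""
    ny, nx = W_fused.shape
    z_planes = np.linspace(z_of_column.min(), z_of_column.max(), n_planes)
    stack = [angular_spectrum(W_fused, wavelength, dx, z) for z in z_planes]
    out = np.empty_like(W_fused)
    for col in range(nx):
        k = np.argmin(np.abs(z_planes - z_of_column[col]))  # nearest in-focus plane
        out[:, col] = stack[k][:, col]
    return out
```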
a Schematic of the MASI prototype capturing reflected wavefields from a standard resolution test chart. b Padded wavefields from all nine sensors propagated to the object plane, revealing distinct information captured by each sensor. Each sensor contributes unique high-frequency details of the resolution target, demonstrating the distributed sensing capability of MASI. c Computational phase synchronization process showing different visualization contrasts achieved by varying phase offsets. The 3×3 color blocks in each inset represent the phase values applied to individual sensors, with different hues indicating different phase values. d Raw lensless diffraction data of a zoomed-in region of the resolution target (the same region highlighted in the Sensor-5 panel of b). e Single-sensor reconstruction from the raw data in d, resolving linewidths of ~2.19 µm. The color map represents depth information (2.21–2.23 cm) resulting from MASI’s tilted configuration relative to the resolution target. f MASI coherent fusion after computational synchronization, resolving 780 nm linewidths at an ultralong working distance of ~2 cm.
As shown in Fig. 3e, f, quantitative analysis through line traces confirms that MASI resolves features down to 780 nm at a ~2 cm working distance, whereas the single-sensor reconstruction is limited to approximately 2.19 µm resolution. This combination of sub-micron resolution and centimeter-scale working distance represents an advancement over conventional imaging approaches, which typically require working distances of one millimeter or less to resolve features at this scale. In Supplementary Fig. S11, we also show different sensor configurations and the corresponding wavefield reconstructions, demonstrating how different sensor combinations affect the resolution. Particularly notable is how the coherent fusion of sensors arranged in complementary positions provides directional resolution improvements along the corresponding spatial dimensions, enabling tailored resolution enhancement for specific imaging applications.
Computational field expansion
Unlike conventional imaging systems, which are often constrained by lens apertures or sensor sizes, MASI leverages light diffraction to recover information from regions outside the nominal detector footprint. This field expansion can be understood as the reciprocal of the wavefield sensing process at the detector. When our coded sensor captures diffracted light, it recovers wavefronts arriving from a range of angles, each carrying information about a different region of the object. These angular components contain spatial information extending beyond the physical sensor boundaries. During computational reconstruction, we perform the conjugate operation by back-propagating these captured angular components to the object plane. By padding the recovered wavefield at the detector plane before back-propagation, we effectively allow these angular components to retrace their propagation paths to regions outside the sensor’s direct field of view. This process is fundamentally governed by the properties of wave propagation: the same waves that carry information from extended object regions to our small detector can be computationally traced back to reveal those extended regions. The field expansion arises naturally as the padded recovered wavefields are numerically propagated from the diffraction plane to the object plane (Fig. 1b), effectively reconstructing parts of the object not directly above the sensor.
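In code, the field expansion amounts to zero-padding the recovered wavefield before the numerical back-propagation, as in the hedged sketch below, which reuses the angular_spectrum helper from the earlier sketch; the pad size is an illustrative parameter rather than a prescribed value.

```python
import numpy as np

def expand_field_of_view(W_sensor, pad_pixels, wavelength, dx, h_s):
    """Zero-pad the recovered detector-plane wavefield and back-propagate it to the
    object plane, so diffraction fills in regions outside the sensor footprint."""
    W_pad = np.pad(W_sensor, pad_pixels, mode="constant")   # W_s^pad
    return angular_spectrum(W_pad, wavelength, dx, -h_s)    # object-plane reconstruction
```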
We also note that both the original and extended wavefield regions maintain the same spatial frequency bandwidth. For the wavefield directly above the coded sensor, the recovered spatial frequency spectrum is centered at baseband including the zero-order component. As we computationally extend to regions beyond the sensor through padding and propagation, the bandwidth remains constant but shifts to different spatial frequencies based on the angular relationship between the extended location and sensor position. This principle is demonstrated in Fig. 3, where Sensor 5 captures baseband frequencies for the resolution target directly above it, while peripheral sensors capture high-frequency bands of the same region through their extended fields. This distributed frequency sampling across multiple sensors is precisely what enables super-resolution in MASI.
Figure 4a demonstrates the field expansion capability when imaging a fingerprint using a single sensor in MASI. Using 532 nm laser illumination at a 19.5 mm working distance, a single 4.6 × 3.4 mm sensor captures only a small portion of the fingerprint’s diffraction pattern. By padding the reconstructed complex wavefield and propagating it to the object plane, we can expand the imaging area from the original sensor size of 4.6 × 3.4 mm to 16.6 × 15.4 mm (with 12 mm padding). Supplementary Movie 2 visualizes the field expansion process. To enhance the visualization of surface features across this expanded field, the recovered phase maps undergo background subtraction, as illustrated in Supplementary Fig. S11a, which improves the contrast of fine features. Figure 4b shows a 3D visualization of the expanded fingerprint phase map covering an area of 16.6 × 15.4 mm, with the inset highlighting resolved sweat pores. Supplementary Fig. S12 further demonstrates the versatility of this approach across different materials, including plastic, wood, and polymer surfaces, each revealing distinct micro-textural details. This capability demonstrates how MASI’s computational approach transforms a small physical detector into a much larger virtual imaging system with enhanced phase contrast visualization.
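The background subtraction can be sketched as a simple high-pass operation on the recovered phase map, as below; the Gaussian low-pass and its width are illustrative choices standing in for the exact processing of Supplementary Fig. S11a.

```python
from scipy.ndimage import gaussian_filter

def enhance_phase_contrast(phase_map, sigma=50):
    """Suppress the slowly varying background so ridges and sweat pores stand out."""
    background = gaussian_filter(phase_map, sigma=sigma)   # low-pass estimate of the background
    return phase_map - background                          # high-pass phase contrast
```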
a To expand the imaging field of view in MASI, we computationally pad the recovered wavefield at the diffraction plane and propagate it to the object plane. With different padding areas, the reconstruction region grows from the original 4.6 × 3.4 mm sensor size to 16.6 × 15.4 mm, allowing a much broader view of the fingerprint to be revealed without additional data acquisition. The color scale represents phase values from 0 to 1. b 3D visualization of the fingerprint across the expanded field, highlighting the detailed fingerprint ridges and the location of individual sweat pores. Supplementary Movie 2 visualizes the field expansion process via incremental padding.
Figure 5 further validates MASI’s capability for high-resolution, large-area phase-contrast imaging of a mouse brain section. Figure 5a displays the recovered complex wavefields from all nine individual sensors in the MASI device, each capturing a 4.6 × 3.4 mm region. Figure 5b demonstrates the field-expanded recovery using just a single sensor (Sensor 3), where computational padding and propagation transform a small 4.6 × 3.4 mm sensor area into a comprehensive 17.0 × 13.4 mm phase-contrast visualization of the entire brain section. The recovered field of view is much larger than the physical sensor area highlighted by the green box in Fig. 5b. The speckle-like features visible outside the brain tissue arise from air bubbles in the mounting medium, a sample preparation artifact common when mounting large tissue sections. Figure 5c shows similar field-expanded recoveries from each individual sensor, with green boxes indicating their physical dimensions and locations. Figures 5d–f present a detailed comparison between MASI lensless raw data (Fig. 5d), single-sensor recovery (Fig. 5e), and MASI coherent synchronization of all sensors (Fig. 5f) for a region of interest in the brain section. The MASI coherent synchronization resolves the myelinated axon structure radiating outward from the central ventricle. This computational field expansion, combined with MASI’s ability to operate without lenses at long working distances, enables a paradigm for wide-field, high-resolution imaging that overcomes traditional optical system limitations43.
a Complex wavefields recovered at each of the coded sensors in a lensless transmission setup. Each individual sensor captures only a fraction of the object’s diffracted field. The dark regions represent the gaps between individual sensors. b Real-space phase image of the brain section after propagation of Sensor 3’s recovered wavefield to the object plane, showing that even a single sensor’s recovery expands to cover the entire brain section. The color scale represents phase values from 0 to 1. c Field-expanded recoveries from individual sensors in MASI. Each recovered phase image highlights distinct aspects of the same sample. d–f Comparative analysis of the region of interest (white box in b): raw lensless diffraction pattern in gray scale (d), single-sensor recovery (e), and MASI coherent synchronization of all sensors (f). The brightness of the recovered wavefields indicates amplitude (A), and hue indicates phase (θ), as defined by the color wheel. The coherent synchronization clearly resolves radiating myelinated axon bundles extending from the ventricle, while the single-sensor recovery shows limited resolution of these neural pathways, as highlighted in the insets.
The reported field expansion capability presents intriguing applications in data security and steganography. Since features outside the physical sensor area become visible only after proper computational padding and back-propagation, MASI creates a natural encryption mechanism for information hiding44. For example, a document could be designed where critical information—authentication markers, security codes, or confidential data—is positioned beyond the sensor’s direct field of view. When capturing the raw intensity image, this information remains completely absent from the recorded data, creating an inherent security layer where the very existence of hidden content is concealed. This is demonstrated in Fig. 5, where Sensor 3’s raw data and direct wavefield recovery give no indication that the computational reconstruction would reveal an entire brain section. Only an authorized party with knowledge of the correct wavefield recovery parameters, propagation distances, coded surface pattern, and padding specifications can computationally reconstruct and reveal this hidden content. This physics-based security approach offers advantages over conventional digital encryption by leaving no visible evidence that protected information exists in the first place.
Computational 3D measurement and view synthesis
Conventional lens-based 3D imaging typically relies on structured illumination techniques, such as fringe projection, speckle pattern analysis, or multiple-angle acquisitions45,46,47. Similarly, perspective view synthesis often requires light field cameras with microlens arrays or multi-camera setups to capture different viewpoints46,47,48. Here, we demonstrate that MASI enables lensless 3D imaging, shape measurement, and flexible perspective view synthesis through computational wavefield manipulation. Unlike the conventional lens-based approaches, MASI extracts three-dimensional information and generates multiple viewpoints through pure computational processing of the recovered complex wavefield.
The concept behind MASI’s 3D imaging capability leverages the fact that a complex wavefield contains the complete optical information of a 3D scene. For 3D shape measurements, we digitally propagate the recovered wavefield to multiple axial planes throughout the volume of interest. At each lateral position, we evaluate a focus metric that quantifies local sharpness across all axial planes49. By identifying the axial position with maximum gradient value for each lateral point, we create a depth map where each pixel’s value represents the axial coordinate of best focus. This approach effectively transforms wavefield information into precise height measurements, as objects at different heights naturally focus at different propagation distances. The resulting 3D map reveals microscale surface variations across the entire field of view.
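A minimal sketch of this depth-from-focus procedure is shown below, reusing the angular_spectrum helper from the earlier sketch. The gradient-based sharpness score and the local averaging window are illustrative choices standing in for the focus metric of ref. 49, and the function and parameter names are assumptions for this example.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def depth_map(W_object, z_candidates, wavelength, dx, window=9):
    """Depth from focus: score local sharpness at each candidate plane and take the
    per-pixel arg-max; returns a height map in the same units as z_candidates."""
    scores = []
    for z in z_candidates:
        amp = np.abs(angular_spectrum(W_object, wavelength, dx, z))
        gy, gx = np.gradient(amp)                       # gradient-based focus metric
        scores.append(uniform_filter(gx**2 + gy**2, size=window))
    best = np.argmax(np.stack(scores), axis=0)          # best-focus plane per pixel
    return np.asarray(z_candidates)[best]
```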
For perspective view synthesis, MASI employs a different approach than conventional light field methods. We first transform the reconstructed object-plane wavefield from real space to reciprocal space using a Fourier transform. In reciprocal space, different angular components of light are spatially separated. By applying a filtering window to select specific angular components and then inverse transforming back to real space, we synthesize images corresponding to different viewing angles. Shifting this filtering window effectively changes the observer’s perspective, allowing a virtual ‘tilting’ around the object for visualization.
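The sketch below illustrates this reciprocal-space filtering, assuming a circular sub-aperture window whose center is shifted to set the synthetic viewing angle; the window shape, radius, and angle-to-frequency mapping are illustrative assumptions rather than the authors’ exact parameters.

```python
import numpy as np

def synthesize_view(W_object, dx, wavelength, tilt_x, tilt_y, aperture_radius):
    """Select an angular sub-aperture in reciprocal space to render one viewing angle.
    tilt_x, tilt_y are viewing angles in radians; aperture_radius is in cycles/m."""
    ny, nx = W_object.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    cx, cy = np.sin(tilt_x) / wavelength, np.sin(tilt_y) / wavelength  # window centre
    window = (FX - cx) ** 2 + (FY - cy) ** 2 <= aperture_radius ** 2
    spectrum = np.fft.fft2(W_object)
    return np.abs(np.fft.ifft2(spectrum * window)) ** 2   # intensity image for this view
```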
Figure 6 demonstrates MASI’s 3D measurement capabilities. In Fig. 6a, we show the recovered wavefield of a bullet cartridge using MASI. The recovered complex wavefield contains rich phase information that encodes the object’s 3D topography. Figure 6b shows an all-in-focus image produced by our digital refocusing approach that combines information from multiple depth planes into a single visualization. This image provides both sharp features and depth information. Figure 6c shows different viewing angles generated by shifting the filtering window position in reciprocal space, revealing surface features that might be obscured from a single perspective. Figure 6d shows the recovered 3D height map revealing the firing pin impression and microscopic surface features, capturing critical ballistic evidence that could link a specific firearm to a cartridge casing. To demonstrate the versatility of MASI’s 3D measurement capabilities, Fig. 6e shows objects with varying feature heights and dimensions, including a LEGO brick with raised lettering, a coin with fine relief details, and a battery with subtle topography. The ability to generate 3D measurements and synthetic viewpoints through purely computational means represents an important advantage over conventional lens-based 3D imaging approaches.
a MASI recovered wavefield of a bullet cartridge, with a zoomed region showing detailed wavefield information. The brightness of the recovered wavefields indicates amplitude (A), and hue indicates phase (θ), as defined by the color wheel. b All-in-focus image with depth color-coding from 0.4 mm (blue) to 4.7 mm (red), providing a comprehensive visualization of the cartridge’s 3D structure. c Synthetic perspective views generated by computationally filtering the wavefield in reciprocal space, where different angular components of light are spatially separated. The color indicates the depth map as in (b). d MASI recovered 3D map of the bullet cartridge, clearly revealing the firing pin impression and surface details for ballistic forensics. The color scale represents height from 0 to 100 µm. e Demonstration of MASI’s 3D measurement capabilities across objects with varying features. Left: LEGO brick with color scale indicating height from 0 to 120 µm; center: a coin with color scale indicating height from 0 to 100 µm; right: a battery with color scale indicating height from 0 to 30 µm. In each case, the 3D topography is reconstructed without mechanical scanning, highlighting MASI’s potential for non-destructive testing and precision metrology. Supplementary Movies 3-4 visualize the 3D focusing process. Supplementary Movie 5 visualizes different perspective views of the 3D object generated post-measurement.