Design of Multi-Waveguide Merged Multi-Frame Integration System for Integrated Interference Imaging System (2024)

1. Introduction

Limited by the diffraction limit, high-resolution imaging of distant targets has always been a challenging subject. Traditional optical systems enhance resolution by increasing the aperture, but such systems have significant drawbacks, including massive resource consumption, difficulties in lens fabrication, and the need for extremely precise mechanical adjustments. In 2012, Lockheed Martin, in collaboration with Duncan’s team at the University of California, proposed a new type of telescopic imaging technology [1,2], which they named SPIDER (Segmented Planar Imaging Detector for Electro-Optical Reconnaissance). Compared to traditional systems, SPIDER draws on the technique of coherent measurement used in astronomy. It employs a microlens array and a photonic integrated circuit (PIC) to replace the telescope interferometer array, achieving integrated interferometric imaging of light. This type of integrated interferometric imaging system significantly reduces the size, weight, and power consumption, and holds considerable potential for applications in fields such as astronomical observation and remote sensing imaging. Between 2015 and 2018, Duncan’s team developed two generations of photonic integrated circuits (PICs) [3,4,5,6]. On this foundation, they constructed a SPIDER imaging demonstration experiment. The experiment successfully validated the principle of the integrated interferometric imaging system. However, due to factors such as low spatial-frequency sampling rates and a low signal-to-noise ratio (SNR), the actual imaging quality was relatively poor. Subsequently, many researchers have conducted studies in three main areas: imaging principles, baseline arrangement methods, and image reconstruction algorithms.

In the area of imaging theory research, in 2018, Yu et al. designed an integrated interferometric imaging system that combined wide-field detection and narrow-field target tracking [7]. The system was structured into two modes: a low-resolution, wide-field detection mode, and a high-resolution, narrow-field tracking mode. Simulation results demonstrated the system’s effective capabilities in detecting and tracking targets. In 2019, Lv et al. [8] performed simulations on the classic SPIDER system to calculate the optimal imaging distance, phase noise tolerance, and precision requirements for optical waveguide fabrication, confirming the application value of integrated interferometric imaging systems for Earth remote sensing. In 2023, Ge et al. employed an asymmetric baseline design to propose a concept for a 3D imaging system based on integrated interferometry [9]. This proposed system offers a new passive 3D positioning and imaging solution with greater integration, speed, and convenience for the fields of planetary, lunar, and deep-space exploration.

In the domain of baseline arrangement methods, Yu et al. [10] proposed a chessboard-style microlens structure and baseline arrangement method in 2018. This approach uses a rectangular envelope for the microlens array, thus enhancing spatial utilization and allowing for the assembly of multiple segments. Subsequently, in 2023, Yu et al. [11] proposed a method to calculate the phase difference in the complex coherence coefficient of two interferometric signals by comparing the extremities of interference fringes within an area where the envelope shape approximates a linear variation. In 2019, Gao et al. [12] proposed a hierarchical multi-level sampling structure, and in 2021, they introduced an improvement in the form of a non-uniform hierarchical multi-level sampling structure [13]. By designing a non-uniform microlens array, this method effectively increased the collection of mid- and low-frequency information, thereby enhancing image quality. Later, in 2023, Deng et al. [14] designed silicon PICs and silicon nitride PICs specifically for the hierarchical multi-level sampling method, accelerating the practical application of this technology. In 2020, Ding et al. [15] introduced a hexagonal baseline arrangement method. This method encompasses both a lateral pairing approach and a longitudinal pairing approach, and it demonstrates that the point spread function of the hexagonal arrangement is superior to that of traditional methods. In 2021, Hu et al. [16] proposed a dense azimuthal sampling method along with a discrete spectral matrix reconstruction technique. This approach allows for continuous sampling of all integer multiples of the base frequency within the highest frequency range, including zero frequency, along the baseline direction. In 2018, Liu et al. initially proposed a design for an integrated interferometric imaging system based on compressed sensing [17], and further improved this approach in 2020 [18]. 
The method modified the relationship between the number of sampled spatial frequencies and the number of lenses from linear to quadratic. In 2022, Liu revised the baseline arrangement design method to a fully connected form [19], where each microlens can form an interferometric baseline with every other microlens. However, a limitation of the compressed sensing-based integrated interferometric imaging system is its inability to perform snapshot imaging. In 2022, Hiyam Debary et al. demonstrated the existence of solutions for the one-dimensional aperture pairing optimization problem [20].

In the field of signal processing and image reconstruction, optimizing the image reconstruction process can also enhance image quality. In 2021, Chen et al. proposed an image reconstruction algorithm based on maximum entropy with entropy priors [21], as well as a simplified image reconstruction algorithm based on modified entropy [22]. These methods draw from signal recovery algorithms used in interferometric measurement. The researchers proposed using maximum entropy and modified entropy as the basis for constructing data penalty terms and combined them with Newton’s method. This algorithm effectively suppresses the ringing effects caused by insufficient high-frequency sampling.

In addition to the inadequacies in spatial frequency domain sampling and reconstruction algorithms, another key factor limiting the practical application of integrated interferometric imaging technology is the low light energy received by the system and the consequently low SNR. In 2022, Zhang et al. [23] analyzed the relationship between baseline length and energy intensity in integrated interferometric imaging systems under static conditions and qualitatively explored the impact of noise on image quality. However, no research has yet specifically addressed methods to improve the SNR of integrated interferometric imaging systems. Traditional remote sensing cameras use time-delay integration (TDI) technology to extend the imaging time, thereby improving the SNR [24]. However, the design of the interference arms in integrated interferometric imaging systems requires a high angular density, which cannot meet the low drift angle requirements of conventional TDI technology. Therefore, TDI cannot be directly applied to the in-orbit operation of integrated interferometric imaging systems.

To address this issue, we combine the concept of TDI technology with the characteristics of integrated interferometric imaging systems and propose a novel design method: the multi-waveguide merged multi-frame integration (MWMMFI) system. The MWMMFI system innovatively utilizes non-uniform sampling timelines. Based on the results of static multi-waveguide merging, the non-uniform sampling time design enables the accurate extraction of signals from multiple interference arms in the same frame. The method merges optical waveguide elements based on the energy distribution, location, and image shift speed of the focused light spots, and then calculates the sampling time axis. Orthogonal detection signals are extracted according to the merging scheme and time axis, and the mutual intensity is calculated by first accumulating and then differencing. Simulation results demonstrate that the approach proposed in this paper effectively suppresses noise and enhances the SNR.

The main content of this paper is organized as follows:

  • The imaging link of the integrated interference imaging system is introduced; it forms the theoretical basis of the system’s imaging principle and of the subsequent simulation tests.

  • The static design, dynamic design, and signal processing parts of the MWMMFI system are described.

  • A simulation test is designed according to the parameters of a silicon avalanche photodiode and on-orbit imaging parameters. The subjective visual effect and objective indices of the image reconstruction are evaluated and analyzed.

2. Imaging Link for an Integrated Interference Imaging System

As shown in Figure 1, the imaging link of the integrated interference imaging system is divided into the following seven parts. The first part is the propagation from the object plane to the microlens array plane. In the second part, the optical field is coupled from the aperture array into the optical waveguide array, and two microlenses can then form an interference baseline. In the third part, an arrayed waveguide grating (AWG) splits the light into narrow spectral bands to obtain quasi-monochromatic light. In the fourth part, a phase modulation device precisely adjusts the baseline phase of each group to meet the interference conditions. In the fifth part, a multi-mode interferometer (MMI) produces interference and outputs four interference optical signals. In the sixth part, a photodetector performs photoelectric conversion, and the resulting orthogonal photocurrent signals are stored in the signal processing module. In the seventh part, the spatial spectrum sampling matrix is assembled from all the baseline measurement results, and the image is reconstructed by an inverse fast Fourier transform (IFFT).

The light field propagation process of the integrated interference imaging system can be abstracted into the model shown in Figure 2. First, the propagation of light from the object plane to the aperture plane is considered, that is, the propagation from the (ξ, η) plane to the (x, y) plane in Figure 2. This process can be regarded as Fraunhofer diffraction, so the optical field distribution S(ξ, η) on the object surface and the optical field Q(x, y) at any point on the aperture surface satisfy the following relation [13]:

$$Q(x,y)=\frac{1}{j\lambda z}\iint S(\xi,\eta)\,e^{j\frac{2\pi}{\lambda}z\left(1+\frac{x^2+y^2}{2z^2}\right)}\,e^{-j\frac{2\pi}{\lambda z}(x\xi+y\eta)}\,d\xi\, d\eta \tag{1}$$

In Equation (1), z is the object distance, which is approximately the orbital altitude when the system works in orbit; λ is the wavelength; (x, y) is the coordinate on the microlens array surface; and (ξ, η) is the coordinate on the object surface.
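As a numerical illustration (not part of the original system design), the Fraunhofer relation of Equation (1) can be sketched with a discrete Fourier transform; the grid size, wavelength, object distance, and source shape below are assumed values:

```python
import numpy as np

# Minimal sketch of Eq. (1): in the Fraunhofer regime, the aperture-plane
# field is (up to the leading phase factor) the Fourier transform of the
# object-plane field. All parameters here are illustrative assumptions.
n = 256                       # samples per axis
lam, z = 600e-9, 500e3        # wavelength (m), object distance (m)
d_obj = 1.0                   # object-plane sample pitch (m)

s = np.zeros((n, n), dtype=complex)   # object field S(xi, eta)
s[120:136, 120:136] = 1.0             # a small square source

# The FFT implements the kernel exp(-j*2*pi/(lam*z)*(x*xi + y*eta));
# fftshift places zero spatial frequency at the array centre.
q = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(s))) / (1j * lam * z)

# Aperture-plane sample pitch implied by the FFT grid:
dx = lam * z / (n * d_obj)
```

The quadratic phase factors of Equation (1) are omitted here because they do not affect the modulus of the aperture-plane field.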

When the light field passes through the microlens array, the array modulates it. The baseline is defined at this point as the line between the centers of two microlenses, whose length is indicated by B in Figure 2. A baseline has a direction in addition to a length. This behavior of two subapertures forming a baseline is called pairing.

The light field modulation function of a microlens array can be expressed as follows:

$$t_l(x,y)=e^{-j\frac{\pi}{\lambda f}\left[(x-m)^2+(y-n)^2\right]}\,\mathrm{circ}\!\left(\frac{\sqrt{(x-m)^2+(y-n)^2}}{a}\right) \tag{2}$$

In Equation (2), (m, n) is the center coordinate of any subaperture, f is the focal length of the focusing lens, a is the radius of the focusing lens, and circ is the circular aperture function. The field Q′(x, y) immediately behind the focusing lens is then as follows:

$$Q'(x,y)=Q(x,y)\,t_l(x,y) \tag{3}$$

Due to the end-coupling, the propagation of the light field from the microlens array to the waveguide array can be regarded as a Fresnel diffraction process. Based on Equation (3), the optical field distribution at the optical waveguide array can be written as follows:

$$R(\alpha,\beta)=\frac{1}{j\lambda l}\iint Q'(x,y)\,e^{jkl}\,e^{j\frac{k}{2l}\left[(\alpha-x)^2+(\beta-y)^2\right]}\,dx\,dy \tag{4}$$

In Equation (4), l is the image distance, which is approximately equal to the focal length; k is the wave number; and (α, β) is the coordinate on the optical waveguide array surface.

On the R(α, β) plane, the optical field values coupled into a pair of optical waveguides constituting a baseline can be expressed as follows:

$$U_m(\alpha_m,\beta_m)=\tilde{U}_m(\alpha_m,\beta_m)\exp\left[j(\omega_m t+\varphi_m)\right] \tag{5}$$

$$U_n(\alpha_n,\beta_n)=\tilde{U}_n(\alpha_n,\beta_n)\exp\left[j(\omega_n t+\varphi_n)\right] \tag{6}$$

where m and n are the serial numbers of different optical waveguide elements, Ũ denotes the amplitude, ω the angular frequency, and φ the initial phase.

After passing through the AWG, the incident light is divided into several narrow spectral bands, so that the two interfering beams satisfy the following:

$$\omega_m\approx\omega_n \tag{7}$$

Through the phase regulator, the phases of the two beams are precisely adjusted to meet the interference condition. Interference then occurs in the MMI, which outputs four interference optical signals that are pairwise separated by 90° in phase. The light intensities detected by the four-channel photodetector are calculated as follows:

$$I_1(m,n)=u_1u_1^*=\frac{1}{4}\tilde{U}_m^2+\frac{1}{4}\tilde{U}_n^2-\frac{\tilde{U}_m\tilde{U}_n}{2}\sin\Delta\varphi_{mn} \tag{8}$$

$$I_2(m,n)=u_2u_2^*=\frac{1}{4}\tilde{U}_m^2+\frac{1}{4}\tilde{U}_n^2+\frac{\tilde{U}_m\tilde{U}_n}{2}\sin\Delta\varphi_{mn} \tag{9}$$

$$I_3(m,n)=u_3u_3^*=\frac{1}{4}\tilde{U}_m^2+\frac{1}{4}\tilde{U}_n^2-\frac{\tilde{U}_m\tilde{U}_n}{2}\cos\Delta\varphi_{mn} \tag{10}$$

$$I_4(m,n)=u_4u_4^*=\frac{1}{4}\tilde{U}_m^2+\frac{1}{4}\tilde{U}_n^2+\frac{\tilde{U}_m\tilde{U}_n}{2}\cos\Delta\varphi_{mn} \tag{11}$$

In the above equations, u_i (i = 1, …, 4) are the four interference output light fields of the MMI, and Δφ_mn is the phase difference between the two incident light channels.

The photocurrent signal P(m, n) after photoelectric conversion can be simply expressed as follows:

$$P(m,n)=R_{APD}\,G_{APD}\,I(m,n) \tag{12}$$

where R_APD is the responsivity of the avalanche photodiode and G_APD is its gain.

According to the van Cittert–Zernike theorem and the interference law of partially coherent light [25], the real and imaginary parts of the mutual intensity J can be expressed as follows:

$$2\,\mathrm{Re}\,J(\alpha_m,\beta_m;\alpha_n,\beta_n)=\frac{\tilde{U}_m\tilde{U}_n}{2}\cos\Delta\varphi_{mn} \tag{13}$$

$$2\,\mathrm{Im}\,J(\alpha_m,\beta_m;\alpha_n,\beta_n)=\frac{\tilde{U}_m\tilde{U}_n}{2}\sin\Delta\varphi_{mn} \tag{14}$$

The mutual intensity can be solved from the differences of Equations (8)–(11):

$$J(\alpha_m,\beta_m;\alpha_n,\beta_n)=\frac{1}{4}\left[P_4(m,n)-P_3(m,n)\right]+\frac{j}{4}\left[P_2(m,n)-P_1(m,n)\right] \tag{15}$$

According to the van Cittert–Zernike theorem, if enough baselines of different directions and lengths are collected, the corresponding mutual intensities can be obtained, forming the spatial spectrum sampling matrix. The object-surface light intensity distribution is then obtained by an inverse Fourier transform [16]:

$$S(\xi,\eta)=\mathcal{F}^{-1}\left[J(\alpha_m,\beta_m;\alpha_n,\beta_n)\right]\Big|_{u=\frac{\alpha_m-\alpha_n}{\lambda z},\; v=\frac{\beta_m-\beta_n}{\lambda z}} \tag{16}$$

The terms u and v in Equation (16) are the coordinates in the spatial frequency domain.
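As a consistency check of the read-out chain above, the four quadrature intensities of Equations (8)–(11) can be generated for assumed field values and the mutual intensity recovered via Equation (15). In this sketch, the responsivity and gain of Equation (12) are set to one, so P_i = I_i:

```python
import numpy as np

# Numerical check of Eqs. (8)-(15), with illustrative (assumed) field values.
u_m, u_n = 1.0, 0.8        # waveguide amplitudes (U~_m, U~_n)
dphi = 0.6                 # phase difference between the two arms (rad)

# Eqs. (8)-(11): the four MMI outputs, pairwise 90 degrees apart in phase
i1 = 0.25 * u_m**2 + 0.25 * u_n**2 - 0.5 * u_m * u_n * np.sin(dphi)
i2 = 0.25 * u_m**2 + 0.25 * u_n**2 + 0.5 * u_m * u_n * np.sin(dphi)
i3 = 0.25 * u_m**2 + 0.25 * u_n**2 - 0.5 * u_m * u_n * np.cos(dphi)
i4 = 0.25 * u_m**2 + 0.25 * u_n**2 + 0.5 * u_m * u_n * np.cos(dphi)

# Eq. (15): the mutual intensity from the two quadrature differences;
# its modulus and argument recover the fringe amplitude and phase, i.e.
# j_mn equals (u_m * u_n / 4) * exp(1j * dphi)
j_mn = 0.25 * (i4 - i3) + 0.25j * (i2 - i1)
```

The constant (non-fringe) terms cancel in the differences, which is exactly why the four-channel quadrature scheme isolates the complex coherence.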

3. Design of the Multi-Waveguide Merged Multi-Frame Integration (MWMMFI) System

The problems of an insufficient effective photocurrent signal and a low SNR appear in the photoelectric conversion step, that is, Equation (12). This results in inadequate spatial frequency sampling accuracy. A traditional integrated interference imaging system can only take single-frame exposures, and extending the exposure time increases not only the accumulated effective photocurrent but also the motion degradation. A system scheme that suppresses motion degradation within a certain range while multiplying the accumulation of effective photocurrent signals would therefore greatly improve the practicality of integrated interference imaging systems.

3.1. Limitations of Traditional Time-Delay Integration Techniques

Traditional TDI technology has been widely used in space optical imaging. As shown on the left side of Figure 3, TDI CMOS technology is taken as an example to describe the principle. The linear-array CMOS detector is oriented parallel to the direction of relative motion of the satellite platform, and the positions of the image points of ground objects are precisely synchronized with the multi-line CMOS array. The image signal is then accumulated row by row. Because the image points of a ground object at different moments strictly match the same column of CMOS pixels, motion degradation in the signal sampling stage is limited to a single frame exposure time. After accumulation, the detector collects several times the light energy of a single-row CMOS linear array with minimal motion degradation.

However, as shown on the right in Figure 3, the focused spot movement trajectory of the integrated interferometric imaging system cannot strictly follow the arrangement direction of the optical waveguide elements. This is because the integrated interference imaging system must ensure that the baseline directions are angularly dense when sampling the spatial spectrum; that is, the interference arms must point in as many different directions as possible. Otherwise, the spatial frequencies in the missing directions will be under-sampled. The MWMMFI design detailed below solves this problem of spot motion offset.

3.2. Static Design

In 2006, Toyoshima found that optical waveguide coupling efficiency could be improved by properly adjusting the size of the Airy disk [26]. Building on this idea, this section determines the merging scheme of optical waveguide elements by designing the size ratio of the Airy disk to the waveguide.

Firstly, the design considers an arbitrary interference arm, an arbitrary microlens, a single field of view, and an instantaneous moment. This part is called the static design.

Suppose that the aperture of a single microlens is D_L and the focal length is f. As shown in Figure 4, a microlens satisfying the optical waveguide coupling condition focuses light from infinity onto the optical waveguide array. The optical waveguide array consists of square optical waveguide elements in an orthogonal arrangement. Each optical waveguide element interferes with the optical waveguide element paired with it through a baseline. The four-way interference signals are detected by the avalanche photodiode and stored in the signal processing module for subsequent processing. The focused spot can be regarded as an Airy disk close to the diffraction limit, whose radius can be calculated by the following formula [26]:

$$R_{Airy}=1.22\frac{\lambda f}{D_L} \tag{17}$$

As shown in Figure 4, the dimensions of the square optical waveguide element are l_WG × l_WG. For convenience, the ratio of the Airy disk radius to the optical waveguide element size is defined as the merging number C, which is calculated as follows:

$$C=\frac{R_{Airy}}{l_{WG}} \tag{18}$$

In this study, C > 1; that is, a large Airy disk is used with small optical waveguide elements. Additionally, the final optical waveguide element merging scheme must meet the following two basic conditions:

  • To maximize the total effective light signal obtained after merging, the Airy disk should cover as many optical waveguide elements as possible.

  • To ensure the accuracy of the energy calculation during in-orbit motion imaging after merging, all waveguide elements involved in merging must lie within the main maximum of the Airy disk at all times.

Based on this, the optical waveguide element merging design under different merging numbers is carried out. The specific design scheme can be divided into two steps: constructing the base square and traversing the optical waveguide element.

Step 1: The optical waveguide element merging base is constructed. The side length of the base square can be calculated by the following equation:

$$l_B=2\left\lfloor\frac{\sqrt{2}\,R_{Airy}}{2\,l_{WG}}\right\rfloor l_{WG} \tag{19}$$

The symbol ⌊·⌋ denotes the floor (round-down) function.

Step 2: In order to meet the two basic design conditions, it is necessary to add or subtract optical waveguide elements on the basis of the base square. The boundary criterion is used as the judging condition for the increase or decrease in optical waveguide elements. The optical waveguide element merging scheme is designed using the traversal method.

As shown in Figure 5, the coordinate origin is at the center of the base square. Assume that the coordinates of any optical waveguide element in the base square are (m_WG, n_WG). The algorithm then traverses the inner and outer layers of optical waveguide elements around the base square boundary (red dashed line in Figure 5). During the traversal, the following judging conditions are adopted as the basis for inclusion in the final merging scheme:

$$\Delta n=\sqrt{R_{Airy}^2-(m_{WG}\,l_{WG})^2}-n_{WG}\,l_{WG}>\frac{l_{WG}}{2} \tag{20}$$

$$\Delta m=\sqrt{R_{Airy}^2-(n_{WG}\,l_{WG})^2}-m_{WG}\,l_{WG}>\frac{l_{WG}}{2} \tag{21}$$

The physical meanings of Δm and Δn in the above two formulas are the distances between the edge of any optical waveguide element and the edge of the Airy disk along the two arrangement directions of the optical waveguide elements. These two inequalities ensure that no optical waveguide element outside the main maximum is involved in merging at any time, regardless of the direction in which the light spot moves. In the design, the two layers of optical waveguide elements inside and outside the base square boundary are taken as the area to be traversed: the outer area to be traversed is shown in purple, and the inner area in yellow. The optical waveguide elements interior to the yellow area do not need to be judged and can be skipped directly; these areas are therefore defined as safe areas, shown in green. Finally, the relative coordinates of all optical waveguide elements satisfying the judging conditions are collected into the in-block merging matrix S. The white optical waveguide elements in Figure 5 do not participate in imaging.

$$S=\begin{bmatrix}S_1\\\vdots\\S_N\end{bmatrix}=\begin{bmatrix}(m_{WG,1},\,n_{WG,1})\\\vdots\\(m_{WG,N},\,n_{WG,N})\end{bmatrix} \tag{22}$$

For the convenience of understanding, as shown in Figure 6, C = 3 and C = 4 are taken as examples to explain the construction process of the merging scheme in detail.

As shown in Figure 6a, when C = 3, the base square is first determined using Equation (19), and the traversal and safe areas are then determined. Next, the traversal begins. The traversal algorithm discards the entire outer traversal region as well as the four corners of the base square. Therefore, when the radius of the Airy disk is about three times the side length of the optical waveguide element, the final merging scheme is the base square with its four corners removed.

However, when C = 4, the final scheme obtained by the same algorithm is completely different. As shown in Figure 6b, the optical waveguide elements in the inner traversal area are completely preserved at this time. The outer layer only retains the intermediate optical waveguide elements in four orientations. Figure 7 shows the static design results when C = 2, 3, and 4. The green color block in the solid red box in the figure is the result of the merging.
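The two-step static design can be sketched in code. The snippet below is our interpretation under stated assumptions: coordinates are element centers in units of l_WG with the origin at the center of the base square, and the judging conditions of Equations (20) and (21) are evaluated at each element's outer corner:

```python
import numpy as np

def static_merging_scheme(C, l_wg=1.0):
    """Sketch of the static design (Steps 1 and 2), under the stated
    assumptions; returns the element centres kept in the merging matrix S."""
    r_airy = C * l_wg
    # Eq. (19): half-side of the base square, in whole waveguide elements
    half = int(np.floor(np.sqrt(2.0) * r_airy / (2.0 * l_wg)))
    kept = []
    # traverse the base square plus two layers inside and outside (Fig. 5)
    for i in range(-(half + 2), half + 2):
        for j in range(-(half + 2), half + 2):
            cx, cy = (i + 0.5) * l_wg, (j + 0.5) * l_wg      # element centre
            ox, oy = abs(cx) + l_wg / 2, abs(cy) + l_wg / 2  # outer corner
            if ox > r_airy or oy > r_airy:
                continue  # entirely outside the Airy disk
            # Eqs. (20)-(21): the clearance between the element edge and the
            # Airy-disk edge must exceed half an element in both directions
            dn = np.sqrt(r_airy**2 - ox**2) - oy
            dm = np.sqrt(r_airy**2 - oy**2) - ox
            if dn > l_wg / 2 and dm > l_wg / 2:
                kept.append((cx, cy))
    return kept
```

For C = 3 this interpretation reproduces the pattern described above: the base square with its four corners removed, i.e., 12 merged elements.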

3.3. Dynamic Design

At this point, the static design part is complete. When the system is in orbit, the position of the focused spot on the optical waveguide plane of the same ground object is different at different times. Therefore, a dynamic design is required. The dynamic design part needs to solve the following three problems:

  • The sampling time of each frame. That is, the question of when to start sampling.

  • The integration time for a single frame. That is, how long to sample each time.

  • After the sampling is completed, how to extract the signal at the correct position in the multi-frame signal matrix.

The following applies to the optical waveguide array of any interference arm and any microlens. As shown in Figure 8, following the conventions of imaging electronics, the upper left corner is taken as the origin. Since the optical waveguide structures discussed in this paper all have square cross sections, region I and region II are symmetric: results obtained when the velocity vector lies in region I can be applied directly to the case where it lies in region II by a simple coordinate transformation. Therefore, this section only details the dynamic design scheme for velocity vector angles θ_a between 0° and 45°.

3.3.1. Transition Time

The residence time of the focused spot on one optical waveguide element differs between interference arms. The times at which the focused spot transfers to an adjacent row or column are defined as the row transition time Δt_R and the column transition time Δt_C:

$$\Delta t_R=\frac{l_{WG}}{v\cos\theta_a} \tag{23}$$

$$\Delta t_C=\frac{l_{WG}}{v\sin\theta_a} \tag{24}$$

where v is the modulus of the velocity vector. The initial transition times T_0R and T_0C, defined below, are the times required, under the spot velocity v, to traverse the minimum Δm and Δn values of the static design. According to the static design results and the symmetry of square optical waveguides, the minimum Δm and Δn values are equal, that is, Δm = Δn = Δ_0. Therefore, the initial transition times can be expressed as follows:

$$T_{0R}=\frac{\Delta_0}{v\cos\theta_a} \tag{25}$$

$$T_{0C}=\frac{\Delta_0}{v\sin\theta_a} \tag{26}$$
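Equations (23)–(26) can be evaluated directly; the speed, element size, clearance Δ_0, and angle below are assumed illustrative values, not the paper's system parameters:

```python
import numpy as np

# Illustrative evaluation of Eqs. (23)-(26); all values are assumptions.
v = 7000.0                    # image-shift speed (element lengths per second)
l_wg = 1.0                    # waveguide element side length
delta_0 = 0.3                 # minimum clearance Delta_0 from the static design
theta_a = np.radians(30.0)    # velocity-vector angle in region I

dt_r = l_wg / (v * np.cos(theta_a))      # Eq. (23): row transition time
dt_c = l_wg / (v * np.sin(theta_a))      # Eq. (24): column transition time
t0_r = delta_0 / (v * np.cos(theta_a))   # Eq. (25): initial row transition time
t0_c = delta_0 / (v * np.sin(theta_a))   # Eq. (26): initial column transition time
```

Note that the ratio Δt_C/Δt_R = cot θ_a depends only on the angle, which is what drives the timeline misalignment discussed next.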

3.3.2. Sampling Time and Single Frame Integration Time

The sampling time and single frame integration time are strongly affected by the placement direction of the interference arm. In this section, for different θ_a, the design of the sampling time and single frame integration time is analyzed from the simplest cases to the most complex.

As shown in Figure 9a, when θ_a = 0°, the placement direction of the interference arm coincides with the moving direction of the focused spot. This is a simple, extreme case. As in the TDI technique, the spot shifts strictly along the row direction and theoretically does not illuminate other columns. In this case, the column transition time is infinite and the row transition time is the minimum over all θ_a. Each row transition moment can serve as the start of a single frame integration.

As θ_a increases, the column transition time is no longer infinite. In general, the row transition time and column transition time are not integer multiples of each other, so the imaging moments become misaligned: the row and column transition moments split the timeline, as shown in Figure 9c. This situation is discussed in more detail later.

As θ_a continues to increase, the row transition time increases and the column transition time decreases. When θ_a = 45°, as shown in Figure 9b, the critical case is reached. This is again a simple, extreme case: the row transition time equals the column transition time, and every row and column transition moment can serve as the start of a single frame integration.

The complex interval θ_a ∈ (0°, 45°) is the main case considered. Here, the misalignment of row and column transition times splits the timeline into uneven periods, shown as the red dashed lines in Figure 9c. It is therefore necessary to design a time axis comprising the imaging moments and the single frame integration time. It must meet the following requirements:

  • The single frame integration time must be the same for different interference arms, to avoid inhomogeneity of the signal between the arms.

  • The single frame integration time should be as long as possible in order to improve the amount of effective optical signal collected.

  • Each segment of row and column invariant time is sampled once.

Each segmentation produces a shorter time period and a longer one. This section adopts the design idea of discarding the short segments and retaining the long segments.

Observing Figure 9c, it can be seen that the split density of the time axis depends on the smaller of the row and column transition times. Additionally, the discarded time periods are all less than half of the smaller value of the row and column transition time. Therefore, the single frame integration time T I is designed to be half of the minimum row and column transition time of all interference arms.

$$T_I=\frac{l_{WG}}{2v} \tag{27}$$

Designing the single frame integration time in this way guarantees that the first and third design requirements are satisfied.

Before designing the sampling time of each interference arm, the system’s maximum integration grade G should be determined. The split timeline is calculated according to Equations (23) and (24). Each time period is compared with T_I, and periods shorter than T_I are discarded. The start moments of the retained periods are marked as follows:

$$t_1,\,t_2,\,t_3,\,\ldots,\,t_G \tag{28}$$

Adding the single frame integration time T_I to each moment in Equation (28) forms the final sampling time axis T_all:

$$T_{all}=\left\{t_1,\,t_1+T_I,\,t_2,\,t_2+T_I,\,t_3,\,t_3+T_I,\,\ldots,\,t_G,\,t_G+T_I\right\} \tag{29}$$

T_all corresponds to the green color blocks in Figure 9c. Before the system operates, the integration circuit of each interference arm can be configured according to its own time axis. The time axis is also an important parameter in the subsequent position extraction of optical waveguide elements.
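The timeline construction of this subsection can be sketched as follows. The scan duration and angle are assumed values; segments shorter than T_I are discarded as described above, and each retained start t_g contributes the pair (t_g, t_g + T_I) of Equation (29):

```python
import numpy as np

def sampling_time_axis(theta_deg, v=1.0, l_wg=1.0, t_end=10.0):
    """Sketch of the non-uniform sampling-time design for one interference
    arm with velocity-vector angle theta in (0, 45) degrees. The scan
    window t_end and unit parameters are illustrative assumptions."""
    th = np.radians(theta_deg)
    dt_r = l_wg / (v * np.cos(th))      # Eq. (23): row transition time
    dt_c = l_wg / (v * np.sin(th))      # Eq. (24): column transition time
    t_i = l_wg / (2.0 * v)              # Eq. (27): single-frame integration time

    # merge all row and column transition moments into one split timeline
    marks = np.union1d(np.arange(0.0, t_end, dt_r),
                       np.arange(0.0, t_end, dt_c))
    # keep only segments long enough for a full integration (Eq. (28))
    starts = [a for a, b in zip(marks[:-1], marks[1:]) if b - a >= t_i]
    # Eq. (29): each retained start contributes (t_g, t_g + T_I)
    t_all = [(t, t + t_i) for t in starts]
    return t_i, t_all
```

Because T_I is fixed at half the minimum transition time, every retained segment accommodates one full integration, which is what makes the per-frame signal uniform across arms.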

3.3.3. Location Extraction of Optical Waveguide Elements from Multiple Frames

In this part, the extraction method for the multi-frame orthogonal signals is derived. The in-block merging matrix S obtained from the static design holds relative coordinates: S describes the extraction of optical waveguide elements inside a single merging block. In different frames, the merging block moves as a whole, so signals cannot be extracted directly. The optical waveguide array coordinate system, however, is an absolute coordinate system. Therefore, S must first be extended to the whole field-of-view block, and the time axis is then taken into account. The result is a multi-frame optical waveguide location extraction scheme that takes the time and merging block coordinates as input and absolute optical waveguide coordinates as output.

As shown in Figure 10, the origin of the coordinate system of the optical waveguide array corresponding to a single microlens is O, and the array of all optical waveguide elements corresponding to a single microlens is the total optical waveguide array. The merging block in Figure 10 is the circumscribed square of the Airy disk in Figure 7. A merging block consists of 2C × 2C optical waveguide elements. The total optical waveguide array is assumed to consist of M_all × N_all optical waveguide elements, and the number of merging blocks is M_B × N_B. The center coordinate of a merging block is ψ(m_B, n_B). The in-block merging matrix in the total optical waveguide array coordinate system can then be expressed as follows:

$$\Psi(m_B,n_B)=\begin{bmatrix}S_1\\\vdots\\S_N\end{bmatrix}+\begin{bmatrix}1\\\vdots\\1\end{bmatrix}\psi(m_B,n_B) \tag{30}$$

where Ψ(m_B, n_B) is a matrix with N rows and one column.

Considering the time axis, Equation (30) is extended to multiple frames as follows:

$$\Psi(m_B,n_B,t)=\Psi(m_B,n_B)+\begin{bmatrix}(1,0)\\\vdots\\(1,0)\end{bmatrix}\left\lfloor\frac{t-T_{0R}}{\Delta t_R}\right\rfloor+\begin{bmatrix}(0,1)\\\vdots\\(0,1)\end{bmatrix}\left\lfloor\frac{t-T_{0C}}{\Delta t_C}\right\rfloor \tag{31}$$

Equation (31) is the multi-frame optical waveguide position extraction scheme. When a merging block needs to be extracted for signal processing, its absolute coordinates are obtained by substituting a time t on the time axis T_all and the initial center position ψ(m_B, n_B) of the merging block. The extracted signal values can then be processed directly by the subsequent signal accumulation.
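A sketch of Equations (30) and (31), assuming S is stored as an (N, 2) array of relative (row, column) offsets and that the whole-block shift is zero before the corresponding initial transition time (our reading of the floor terms):

```python
import numpy as np

def extract_positions(S_rel, psi_center, t, t0_r, t0_c, dt_r, dt_c):
    """Map the relative in-block merging matrix S to absolute waveguide
    coordinates at sampling time t. Argument names and the (N, 2) layout
    are illustrative assumptions."""
    S_rel = np.asarray(S_rel)
    # Eq. (30): shift the in-block pattern to the block centre
    psi = S_rel + np.asarray(psi_center)
    # Eq. (31): whole-block shift accumulated by row/column transitions
    row_shift = int(np.floor((t - t0_r) / dt_r)) if t >= t0_r else 0
    col_shift = int(np.floor((t - t0_c) / dt_c)) if t >= t0_c else 0
    return psi + np.array([row_shift, col_shift])
```

For example, with relative offsets [[0, 0], [1, -1]], block center (4, 4), and t = 3.0 against T_0R = 1.0, Δt_R = 1.0, T_0C = 2.0, Δt_C = 2.0, the block has shifted by two rows and zero columns.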

3.4. Signal Processing

The signal processing of the MWMMFI system differs from that of the traditional system: the MWMMFI system adds a multi-frame accumulation of each individual signal before calculating the signal differences. The process can be described by the following equation:

$$J(\alpha_m,\beta_m;\alpha_n,\beta_n)=\frac{1}{4}\left[\sum_{(m_B,n_B)\in\Psi(m_B,n_B,t)}P_4(m_B,n_B,t)-\sum_{(m_B,n_B)\in\Psi(m_B,n_B,t)}P_3(m_B,n_B,t)\right]+\frac{j}{4}\left[\sum_{(m_B,n_B)\in\Psi(m_B,n_B,t)}P_2(m_B,n_B,t)-\sum_{(m_B,n_B)\in\Psi(m_B,n_B,t)}P_1(m_B,n_B,t)\right] \tag{32}$$

It is worth mentioning that, compared with the traditional method, the signal processing of the MWMMFI system also suppresses noise. The measured values of avalanche photodiodes can be briefly described by the following mathematical model [23]:

\[
\hat{P} = P_{real} + \sigma_{noise} + \kappa \tag{33}
\]

where $\hat{P}$ is the actual optical power value detected by the avalanche photodiode at a given position, $P_{real}$ is the theoretical true value, $\sigma_{noise}$ is the RMS value of the noise, and $\kappa$ is the random noise value. The avalanche photodiode noise can thus be regarded as a random number centered on $\sigma_{noise}$ and varying within the range of $\kappa$.

It is assumed that the manufacturing errors among optical waveguides are highly consistent. According to Equation (33), as the number of optical waveguide elements involved in merging increases, the zero-mean random term $\kappa$ tends toward zero in the accumulation, while the RMS term $\sigma_{noise}$ accumulates. The subsequent difference calculation then cancels the accumulated RMS terms against each other. In this way, the errors are eliminated while the difference between the true values is multiplied by the accumulation count.
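This cancellation argument can be checked with a small Monte Carlo sketch under the model of Equation (33). The function name and parameter values are illustrative, and $\kappa$ is modeled here as zero-mean Gaussian noise, which is an assumption, not the paper's exact noise model:

```python
import numpy as np

rng = np.random.default_rng(0)

def detect(p_real, sigma, n_elements, n_frames):
    """Accumulated reading of one output channel under the model
    P_hat = P_real + sigma + kappa (Eq. (33)); kappa is zero-mean noise."""
    kappa = rng.normal(0.0, sigma, size=(n_frames, n_elements))
    return np.sum(p_real + sigma + kappa)   # multi-waveguide, multi-frame sum

sigma = 0.05
n = 4 * 50              # 4 merged elements x 50 integration stages
p4 = detect(0.60, sigma, 4, 50)
p3 = detect(0.40, sigma, 4, 50)

# The common sigma offset cancels in the difference; kappa averages out,
# while the true-value difference is multiplied by the accumulation count.
diff = p4 - p3
print(diff / n)         # close to the true difference 0.60 - 0.40 = 0.20
```

The per-accumulation difference recovers the true-value gap to within roughly $\sigma\sqrt{2/n}$, which is the mechanism the differencing step relies on.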

4. Simulation Experiment Design, Results, and Analysis

The purpose of the simulation tests in this section is to verify the effectiveness of the MWMMFI system for increasing the effective optical signal cumulants, improving the image SNR, and suppressing noise.

Noise is set according to reference [23] and the S12427-02 silicon-based avalanche photodiode produced by Hamamatsu.

The image simulation and reconstruction algorithm follows the imaging link described in Section 2 of this paper. The simulation experiment refers to the research of Wu et al. [27] and adds the parameter modules specific to the MWMMFI system on top of that simulation process. The simulation process of the MWMMFI system is shown in Figure 11, where the blue steps represent the modules unique to the MWMMFI system.

The original image of the US Navy standard resolution board for the simulation experiment is shown in Figure 12. The imaging environment parameters are set as shown in Table 1.

The test results are evaluated by combining subjective visual effects and objective indicators. The subjective visual evaluation consists of observing the reconstructed image with the naked eye, mainly examining whether the stripes on the resolution board can be clearly distinguished. The peak signal-to-noise ratio (PSNR), a common evaluation metric in the field of image processing, is used as the objective index. The larger the PSNR, the better the image quality.
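For reference, a minimal PSNR implementation consistent with this definition (assuming 8-bit images with a peak value of 255) is:

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB for two same-sized images.
    Higher PSNR indicates a reconstruction closer to the reference."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
    if mse == 0:
        return float("inf")           # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((8, 8), 128.0)
noisy = ref + 10.0                    # uniform error of 10 gray levels
print(round(psnr(ref, noisy), 2))     # 10*log10(255^2/100) = 28.13
```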

4.1. Relationship between Signal and Noise with Different Merging Numbers

Table 2 lists the normalized signal and noise values for a single avalanche photodiode. It can be seen that the single-frame SNR of a single detector in the MWMMFI system is lower than that of the traditional system (C = 1). In addition, as the merging number increases, the effective photocurrent signal attenuates significantly. This is because the aperture of the focusing lens does not change, so the light energy entering a single aperture does not increase; however, the focused spot becomes larger, so the light energy received by each optical waveguide element is reduced.

On the other hand, the shot noise (photon noise) scales with the square root of the effective photocurrent. When the effective photocurrent is very weak, the shot noise becomes stronger than the effective signal. Therefore, a larger merging number is not always better for the MWMMFI system: an excessively large merging number may cause the effective signal to be drowned in noise, so that the subsequent cumulative-difference denoising can no longer play its role.
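The square-root relationship can be sketched as follows. The even split of a fixed aperture's energy among n elements is a simplification for illustration; the actual per-element energy also depends on the focused spot profile:

```python
import math

def shot_noise_snr(signal):
    """Shot-noise-limited SNR: noise grows as sqrt(signal), so
    SNR = signal / sqrt(signal) = sqrt(signal) (normalized units)."""
    return signal / math.sqrt(signal)

# Splitting a fixed aperture's energy among n waveguide elements
# halves the per-element SNR each time n quadruples:
total = 1.0
for n in (1, 4, 9, 16):
    print(n, round(shot_noise_snr(total / n), 3))
```

Because the signal falls linearly with the split while the noise falls only as its square root, the per-element SNR degrades as the merging number grows, which is the trade-off behind the "not always better" conclusion above.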

4.2. Change in Mutual Intensity after Merging

Figure 13 shows how the normalized mutual intensity value changes with the integration stage. The reference value for normalization is the mutual intensity of the traditional method; that is, the position marked by the black dotted line in the figure equals one.

It can be seen in Figure 13a that when the integration stage is relatively small, the mutual intensity of all MWMMFI systems is smaller than that of the traditional system. However, after about two or three integrations, the mutual intensity already exceeds that of the traditional scheme. Moreover, provided the effective photocurrent signal is not drowned by noise, the larger the merging number, the larger the mutual intensity. However, at the same integration stage, the increase in mutual intensity obtained by changing the merging number from two to three is larger than that obtained by changing it from three to four; there is a ceiling effect. This ceiling effect is likewise limited by the aperture of the focusing lens. Referring to the conclusion of Section 4.1, when the merging number is too large, the costs of the MWMMFI design will far outweigh the benefits it brings.

Figure 13b shows the local details. Here, the mutual intensity values of the three merging numbers lie close together, and the curves for merging numbers of three and four even become entangled. This is because the noise suppression effect is relatively poor when the integration stage is small, and the residual noise leads to entanglement of the mutual intensity values. Therefore, MWMMFI systems should use as large an integration stage as possible.

4.3. Image Reconstruction Results

First, the case of a constant integration stage is studied. Figure 14 shows the influence of different merging numbers on the reconstructed image quality when the integration stage is one under the influence of noise. Table 3 lists the quantitative evaluation of the reconstructed images using the objective evaluation index PSNR.

Figure 14a is seriously affected by noise. This is because the traditional design lacks a noise suppression process: the orthogonal signals are differenced directly, and the residuals still degrade the image quality. The fringe details in Figure 14a are completely indistinguishable, and its PSNR value is the smallest of all the images.

In Figure 14b,c, the brightness is lower than in Figure 14a because zero-frequency sampling also requires multi-waveguide merging. The energy division among multiple waveguides lowers the zero-frequency sampling value, which describes the overall brightness of the image. Although the brightness is lower, the noise is significantly weaker than in the traditional system, reflecting the noise suppression effect of the MWMMFI design. The comparison between Figure 14b,c also shows that the scheme with the higher merging number is less noisy: the fringes on the right side of Figure 14c are almost completely distinguishable, giving a better subjective visual effect and a higher PSNR value.

Next, the change in the reconstructed image with the integration stage is studied. Figure 15 shows the reconstruction results of the systems with the two merging numbers under different integration stages. Table 4 shows the PSNR values of the MWMMFI system images with merging numbers of two and three under different integration stages. The integration stage of zero-frequency sampling is uniformly set to three.

It can be seen intuitively from Figure 15 that as the integration stage increases, the noise is clearly eliminated and the smallest fringes become fully resolvable. This also shows that increasing the integration stage suppresses noise as effectively as increasing the merging number. Considering the cost of increasing the merging number, a small merging number combined with a large integration stage should be preferred when designing an MWMMFI imaging system.

However, observing the trend of the PSNR, its improvement slows as the integration stage increases. This indicates that the influence of noise on image reconstruction has been largely eliminated; to further improve the reconstruction, limiting factors other than energy and the signal-to-noise ratio must be considered. Comparing Table 3 and Table 4, with a merging number of two, the MWMMFI design increases the PSNR by up to about 4 times; with a merging number of three, by up to about 6.58 times.
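The reported improvement factors follow directly from the PSNR values in Tables 3 and 4:

```python
baseline = 2.1354   # PSNR of the traditional system (Table 3, row (a))
best_c2 = 8.5577    # merging number 2, integration stage 40 (Table 4, row (d))
best_c3 = 14.0408   # merging number 3, integration stage 40 (Table 4, row (h))

print(round(best_c2 / baseline, 2))   # ratio for merging number 2
print(round(best_c3 / baseline, 2))   # ratio for merging number 3
```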

5. Conclusions

This study proposes a MWMMFI system based on a novel non-uniform sampling time axis, aiming to solve the problem of the low signal-to-noise ratio of traditional integrated interferometric imaging systems in orbit environments. This paper first introduces the imaging link of the integrated interference imaging system, laying the foundation for the subsequent simulation experiments. Then, the design method of the MWMMFI system is proposed. The design method consists of three parts: static design, dynamic design, and signal processing. The purpose of the static design is to obtain the size of the merging block and the in-block merging matrix. The purpose of the dynamic design is to obtain the time axis of multi-frame signal extraction. The purpose of the signal processing is to suppress noise. Finally, the effectiveness of the MWMMFI system for increasing the effective optical signal cumulant, improving the image signal-to-noise ratio, and suppressing noise is verified by simulation experiments. The simulations indicate that, compared to traditional systems, the MWMMFI design can increase the peak signal-to-noise ratio by up to 4 times when the merging number is two, and by up to 6.58 times when the merging number is three. This provides both technical insights and theoretical guidance for enhancing the SNR of integrated interferometric imaging systems in orbit.

Author Contributions

Conceptualization, C.W. and C.L.; methodology, C.W. and H.H.; software, C.W., H.H. and S.Y.; data curation, C.W. and Y.D.; writing—original draft preparation, C.W.; writing—review and editing, H.H., C.L., S.Y. and Q.G.; funding acquisition, C.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grant 62175236.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors would like to thank the anonymous reviewers for their valuable comments.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Kendrick, R.L.; Duncan, A.; Ogden, C.; Wilm, J.; Stubbs, D.M.; Thurman, S.T.; Su, T.; Scott, R.P.; Yoo, S.J.B. Flat-panel space-based space surveillance sensor. In Proceedings of the Advanced Maui Optical and Space Surveillance Technologies Conference, Maui, HI, USA, 10–13 September 2013; pp. 10–13. [Google Scholar]
  2. Kendrick, R.L.; Duncan, A.; Ogden, C.; Wilm, J.; Thurman, S.T. Segmented Planar Imaging Detector for EO Reconnaissance. In Proceedings of the Imaging and Applied Optics, Arlington, VA, USA, 23–27 June 2013; p. CM4C.1. [Google Scholar]
  3. Thurman, S.T.; Kendrick, R.L.; Duncan, A.; Wuchenich, D.; Ogden, C. System Design for a SPIDER Imager. In Proceedings of the Frontiers in Optics, San Jose, CA, USA, 18–22 October 2015; p. FM3E.3. [Google Scholar]
  4. Scott, R.P.; Su, T.; Ogden, C.; Thurman, S.T.; Kendrick, R.L.; Duncan, A.; Yu, R.; Yoo, S.J.B. Demonstration of a photonic integrated circuit for multi-baseline interferometric imaging. In Proceedings of the 2014 IEEE Photonics Conference, San Diego, CA, USA, 12–16 October 2014. [Google Scholar]
  5. Su, T.; Scott, R.P.; Ogden, C.; Thurman, S.T.; Kendrick, R.L.; Duncan, A.; Yu, R.; Yoo, S.J.B. Experimental demonstration of interferometric imaging using photonic integrated circuits. Opt. Express 2017, 25, 12653–12665. [Google Scholar] [CrossRef] [PubMed]
  6. Su, T.; Liu, G.; Badham, K.E.; Thurman, S.T.; Kendrick, R.L.; Duncan, A.; Yoo, S.J.B. Interferometric imaging using Si3N4 photonic integrated circuits for a SPIDER imager. Opt. Express 2018, 26, 12801–12812. [Google Scholar] [CrossRef] [PubMed]
  7. Yu, Q.; Wu, D.; Chen, F.; Sun, S. Design of a wide-field target detection and tracking system using the segmented planar imaging detector for electro-optical reconnaissance. Chin. Opt. Lett. 2018, 16, 071101. [Google Scholar] [CrossRef]
  8. Lv, G.-M.; Li, Q.; Chen, Y.-T.; Feng, H.-J.; Xu, Z.-H.; Mu, J. An improved scheme and numerical simulation of segmented planar imaging detector for electro-optical reconnaissance. Opt. Rev. 2019, 26, 664–675. [Google Scholar] [CrossRef]
  9. Ge, B.; Yu, Q.; Chen, J.; Sun, S. Passive 3D Imaging Method Based on Photonics Integrated Interference Computational Imaging System. Remote Sens. 2023, 15, 2333. [Google Scholar] [CrossRef]
  10. Yu, Q.; Ge, B.; Li, Y.; Yue, Y.; Chen, F.; Sun, S. System design for a “checkerboard” imager. Appl. Opt. 2018, 57, 10218–10223. [Google Scholar] [CrossRef] [PubMed]
  11. Chen, J.; Yu, Q.; Ge, B.; Zhang, C.; He, Y.; Sun, S. A Phase Difference Measurement Method for Integrated Optical Interferometric Imagers. Remote Sens. 2023, 15, 2194. [Google Scholar] [CrossRef]
  12. Gao, W.P.; Wang, X.R.; Ma, L.; Yuan, Y.; Guo, D.F. Quantitative analysis of segmented planar imaging quality based on hierarchical multistage sampling lens array. Opt. Express 2019, 27, 7955–7967. [Google Scholar] [CrossRef] [PubMed]
  13. Gao, W.; Yuan, Y.; Wang, X.-R.; Ma, L.; Zhao, Z.; Yuan, H. Quantitative analysis and optimization design of the segmented planar integrated optical imaging system based on an inhom*ogeneous multistage sampling lens array. Opt. Express 2021, 29, 11869–11884. [Google Scholar] [CrossRef]
  14. Deng, X.; Tao, W.; Diao, Y.; Sang, B.; Sha, W. Imaging Analysis of Photonic Integrated Interference Imaging System Based on Compact Sampling Lenslet Array Considering On-Chip Optical Loss. Photonics 2023, 10, 797. [Google Scholar] [CrossRef]
  15. Ding, C.; Zhang, X.; Liu, X.; Meng, H.; Xu, M. Structure design and image reconstruction of hexagonal-array photonics integrated interference imaging system. IEEE Access 2020, 8, 139396–139403. [Google Scholar] [CrossRef]
  16. Hu, H.; Liu, C.; Zhang, Y.; Feng, Q.; Liu, S. Optimal design of segmented planar imaging for dense azimuthal sampling lens array. Opt. Express 2021, 29, 24300–24314. [Google Scholar] [CrossRef] [PubMed]
  17. Liu, G.; Wen, D.S.; Song, Z.X. System design of an optical interferometer based on compressive sensing. Mon. Not. R. Astron. Soc. 2018, 478, 2065–2073. [Google Scholar] [CrossRef]
  18. Liu, G.; Wen, D.; Song, Z.; Jiang, T. System design of an optical interferometer based on compressive sensing: An update. Opt. Express 2020, 28, 19349–19361. [Google Scholar] [CrossRef] [PubMed]
  19. Liu, G.; Wen, D.; Fan, W.; Song, Z.; Sun, Z. Fully connected aperture array design of the segmented planar imaging system. Opt. Lett. 2022, 47, 4596–4599. [Google Scholar] [CrossRef] [PubMed]
  20. Debary, H.; Mugnier, L.M.; Michau, V. Aperture configuration optimization for extended scene observation by an interferometric telescope. Opt. Lett. 2022, 47, 4056–4059. [Google Scholar] [CrossRef]
  21. Chen, T.B.; Zeng, X.F.; Bai, Y.Y. Image reconstruction of photonics integrated interference imaging: Entropy prior. Acta Opt. Sin. 2021, 41, 2311002. [Google Scholar]
  22. Chen, T.; Zeng, X.; Zhang, Z.; Zhang, F.; Bai, Y.; Zhang, X. REM: A simplified revised entropy image reconstruction for photonics integrated interference imaging system. Opt. Commun. 2021, 501, 127341. [Google Scholar] [CrossRef]
  23. Ziran, Z.; Guomian, L.; Huajun, F.; Zhihai, X.; Qi, L.; Hao, Z.; Yueting, C. Analysis of Signal Energy and Noise in Photonic Integrated Interferometric Imaging System. Acta Opt. Sin. 2022, 42, 1311001. [Google Scholar]
  24. Lepage, G.; Bogaerts, J.; Meynants, G. Time-Delay-Integration Architectures in CMOS Image Sensors. IEEE Trans. Electron Devices 2009, 56, 2524–2533. [Google Scholar] [CrossRef]
  25. Thompson, A.R.; Moran, J.M.; Swenson, G.W., Jr. Van Cittert–Zernike theorem, spatial coherence, and scattering. In Interferometry and Synthesis in Radio Astronomy; Springer: Cham, Switzerland, 2017; pp. 767–786. [Google Scholar]
  26. Toyoshima, M. Maximum fiber coupling efficiency and optimum beam size in the presence of random angular jitter for free-space laser systems and their applications. J. Opt. Soc. Am. A 2006, 23, 2246–2250. [Google Scholar] [CrossRef] [PubMed]
  27. Wu, D.; Yu, Q.; Yue, Y.; Chen, F.C. Study of segmented planar imaging detector for electro-optical reconnaissance. Infrared 2018, 39, 1–6. [Google Scholar]


Figure 1. Full link diagram of the integrated interference imaging system.


Figure 2. Schematic diagram of light field transfer in an integrated interference imaging system.


Figure 3. Image point trajectory comparison between the TDI CMOS and integrated interference imaging system (IIIS). The different color blocks of the tilted interference arm represent the optical waveguide arrays corresponding to different microlenses.


Figure 4. Diagram of the focusing spot in the optical waveguide array of the MWMMFI system.


Figure 5. Diagram of the decision region to be traversed.


Figure 6. The traversal process of the static design when the merging numbers are 3 and 4. (a) Traversal when the merging number is 3; (b) traversal when the merging number is 4.


Figure 7. Static design results for different merging numbers.


Figure 8. The square optical waveguide array has symmetry.


Figure 9. Design scheme for the sampling time and single-frame integration time with different interferometer arm placements. (a) Row and column transition timeline for $\theta_a = 0°$. Within the same color block, the coordinates are invariant. (b) Row and column transition timeline for $\theta_a = 45°$. (c) Row and column transition timeline for the general case.


Figure 10. Relationship between the merging block and the optical waveguide array coordinate system.


Figure 11. Simulation experiment process for the MWMMFI system.


Figure 12. The original image used in the simulation experiment.


Figure 13. Effects of the merging number and integration stage on mutual intensity. (a) Integration stage from 1 to 50; (b) detail for low integration stages.


Figure 14. Image reconstruction results and details when the integration stage is 1 and the merging number varies. (a) Merging number of 1 (traditional SPIDER system); (b) merging number of 2 (MWMMFI system); (c) merging number of 3 (MWMMFI system).


Figure 15. Reconstruction results of the two systems under different integration stages. (a–d) Reconstructed images of the system with a merging number of 2 at integration stages of 10, 20, 30, and 40. (e–h) Reconstructed images of the system with a merging number of 3 at integration stages of 10, 20, 30, and 40.


Table 1. Environment parameter settings.


Parameter | Value
Altitude of orbit | 200 km
Relative moving speed | 10 km/s
Number of PICs | 37
Longest baseline length | 138 mm
Number of microlenses in a single arm | 138
Fill factor | 1
Diameter of the microlens | 1 mm
Optical waveguide element size | 3 μm
Focal length of the microlens | 10 mm (C = 2); 14 mm (C = 3); 18 mm (C = 4)
Wavelength | 800 nm
Image size | 512 pixels × 512 pixels
Microlens pairing method | End-to-end pairing
Responsivity | 46 A/W
Quantum efficiency | 0.75
Fiber coupling loss | 6 dB
PIC propagation loss | 15 dB
Number of AWG spectral channels | 40
Dark current | 0.05 nA
Load resistance | 50 Ω
Multiplication factor | 100
Thermal control temperature | 300 K


Table 2. The normalized signal and noise values for a single avalanche photodiode.


Merging Number | Normalized Signal Value | Normalized Noise RMS Value
1 | 1.0000 | 0.0749
2 | 0.1573 | 0.0262
3 | 0.0912 | 0.0172
4 | 0.0593 | 0.0148


Table 3. PSNR of reconstructed images with an integration stage of 1 and different merging numbers.


Index | Merging Number | PSNR
(a) | 1 | 2.1354
(b) | 2 | 4.6675
(c) | 3 | 6.8700


Table 4. PSNR values of MWMMFI system images with merging numbers of 2 and 3 under different integration stages.


Index | Merging Number | Integration Stage | PSNR
(a) | 2 | 10 | 5.3730
(b) | 2 | 20 | 7.2245
(c) | 2 | 30 | 8.1186
(d) | 2 | 40 | 8.5577
(e) | 3 | 10 | 9.2292
(f) | 3 | 20 | 10.9416
(g) | 3 | 30 | 12.8910
(h) | 3 | 40 | 14.0408

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.


© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).