As high dynamics and weak signals are two primary concerns in navigation using Global Navigation Satellite System (GNSS) signals, an acquisition algorithm based on three fractional Fourier transform (FRFT) operations is presented to simplify the calculation effectively. Firstly, correlation results similar to linear frequency modulated (LFM) signals are derived on the basis of the high-dynamic GNSS signal model. Then, the principle of obtaining the optimum rotation angle is analyzed; the angle is determined from the FRFT projection lengths at two selected rotation angles. Finally, the Doppler shift, Doppler rate, and code phase are accurately estimated in a real-time, low signal to noise ratio (SNR) wireless communication system. Theoretical analysis and simulation results show that the fast FRFT algorithm can accurately estimate the high dynamic parameters by converting the traditional two-dimensional search process into only three FRFT operations. While the acquisition performance remains basically the same, the computational complexity and running time are greatly reduced, which is more conducive to practical application.
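The key observation above, that after code wipe-off a high-dynamic correlation behaves like an LFM chirp whose start frequency is the Doppler shift and whose chirp rate is the Doppler rate, can be sketched numerically. The sketch below uses a dechirp-and-FFT search as a simple stand-in for the FRFT peak search; the sampling rate, signal parameters, and candidate grid are invented for illustration and this is not the paper's three-FRFT procedure.

```python
import numpy as np

# Illustrative stand-in for the FRFT peak search: dechirp with a candidate
# Doppler rate, then FFT; the sharpest peak identifies both parameters.
fs = 1000.0                      # sampling rate, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
f_dopp, f_rate = 120.0, 50.0     # true Doppler shift (Hz) and rate (Hz/s)
x = np.exp(1j * (2 * np.pi * f_dopp * t + np.pi * f_rate * t**2))

best = (0.0, 0.0, -np.inf)
for k in np.arange(0.0, 100.0, 2.0):          # candidate Doppler rates
    spec = np.abs(np.fft.fft(x * np.exp(-1j * np.pi * k * t**2)))
    if spec.max() > best[2]:
        freqs = np.fft.fftfreq(t.size, 1 / fs)
        best = (k, freqs[int(spec.argmax())], spec.max())

rate_hat, dopp_hat, _ = best     # estimated Doppler rate and shift
```

When the candidate rate matches the true one, the dechirped signal collapses to a single tone, mimicking the energy concentration at the optimum FRFT rotation angle.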
In real-time strategy (RTS) games, the ability to recognize other players’ goals is important for creating artificial intelligence (AI) players. However, most current goal recognition methods do not take into account the player’s deceptive behavior, which often occurs in RTS game scenarios, resulting in poor recognition results. To solve this problem, this paper proposes goal recognition for the deceptive agent, an extended goal recognition method that applies the deductive reasoning method (from the general to the specific) to model the deceptive agent’s behavioral strategy. First of all, a general deceptive behavior model is proposed to abstract the features of deception, and then these features are applied to construct the behavior strategy that best matches the deceiver’s historical behavior data by the inverse reinforcement learning (IRL) method. Finally, to interfere with the implementation of the deceptive behavior, we construct a game model to describe the confrontation scenario and derive the most effective interference measures.
It is essential to maximize capacity while satisfying the transmission delay requirement of an unmanned aerial vehicle (UAV) swarm communication system. To address this challenge, a dynamic decentralized optimization mechanism is presented for joint spectrum and power (JSAP) resource allocation based on deep Q-learning networks (DQNs). Each UAV-to-UAV (U2U) link is regarded as an agent that is capable of identifying the optimal spectrum and power to communicate with one another. A convolutional neural network, a target network, and experience replay are adopted during training. The simulation findings indicate that the proposed method can improve both the communication capacity and the probability of successful data transmission when compared with random centralized assignment and multichannel access methods.
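As a toy illustration of the per-link decision problem, the sketch below replaces the paper's DQN (with its convolutional network, target network, and experience replay) by tabular Q-learning for a single stateless agent; the channel/power action set and the capacity table are invented values.

```python
import numpy as np

# Toy sketch: one U2U agent learns which (channel, power) action yields the
# highest noisy capacity feedback.  Tabular Q-learning stands in for a DQN.
rng = np.random.default_rng(0)
n_ch, n_pw = 3, 2                       # channels x power levels (assumed)
true_rate = np.array([[0.2, 0.4],       # mean capacity per (channel, power)
                      [0.1, 0.3],
                      [0.5, 0.9]])      # channel 2 at high power is best
Q = np.zeros((n_ch, n_pw))
alpha, eps = 0.1, 0.2
for step in range(3000):
    if rng.random() < eps:              # epsilon-greedy exploration
        c, p = rng.integers(n_ch), rng.integers(n_pw)
    else:                               # exploit current estimate
        c, p = np.unravel_index(Q.argmax(), Q.shape)
    r = true_rate[c, p] + 0.05 * rng.normal()   # noisy capacity feedback
    Q[c, p] += alpha * (r - Q[c, p])            # stateless TD update

best_c, best_p = np.unravel_index(Q.argmax(), Q.shape)
```

A DQN generalizes this table to a learned function of the observed channel state, which is what allows decentralized agents to scale beyond small action sets.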
An efficient and real-time simulation method is proposed for the dynamic electromagnetic characteristics of cluster targets to meet the requirements of practical engineering applications. First, the coordinate transformation method is used to establish a geometric model of the observation scene, which is described by the azimuth and elevation angles of the radar in the target reference frame and the attitude angles of the target in the radar reference frame. Then, an approach for dynamic electromagnetic scattering simulation is proposed. Finally, a fast computing method based on sparsity in the time, space, and frequency domains is proposed, and the sparsity-based dynamic scattering characteristics of typical cluster targets are analyzed. The error between the sparsity-based method and the benchmark is small, proving the effectiveness of the proposed method.
The existing recognition algorithms of space-time block code (STBC) for multi-antenna (MA) orthogonal frequency-division multiplexing (OFDM) systems use feature extraction and hypothesis testing to identify the signal types in a complex communication environment. However, owing to the restrictions on prior information and channel conditions, these existing algorithms cannot perform well under strong interference and non-cooperative communication conditions. To overcome these defects, this study introduces deep learning into the STBC-OFDM signal recognition field and proposes a recognition method based on the fourth-order lag moment spectrum (FOLMS) and an attention-guided multi-scale dilated convolution network (AMDCNet). The fourth-order lag moment vectors of the received signals are calculated and stitched to form the two-dimensional FOLMS, which is used as the input of the deep learning-based model. Then, multi-scale dilated convolution is used to extract the details of images at different scales, and a convolutional block attention module (CBAM) is introduced to construct the attention-guided multi-scale dilated convolution module (AMDCM), which makes the network focus more on the target area and obtain multi-scale guided features. Finally, concatenate fusion, residual blocks, and fully-connected layers are applied to acquire the STBC-OFDM signal types. Simulation experiments show that the average recognition probability of the proposed method at −12 dB is higher than 98%. Compared with the existing algorithms, the recognition performance of the proposed method is significantly improved, with good adaptability to environments with strong disturbances. In addition, the proposed deep learning-based model can directly identify the pre-processed FOLMS samples without a priori information on the channel and noise, which makes it more suitable for non-cooperative communication systems than the existing algorithms.
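The FOLMS input construction can be sketched as follows. The exact lag-moment definition used by the paper is not given in the abstract; the sketch below assumes one common fourth-order lag moment, m4(tau) = E[x(n) x(n+tau)^3], purely for illustration, and stacks per-antenna moment vectors into the two-dimensional "image" that a CNN would consume.

```python
import numpy as np

# Hypothetical FOLMS-style preprocessing: per-antenna fourth-order lag
# moment vectors stacked into a 2-D array for a CNN input.
def fourth_order_lag_moments(x, max_lag):
    """One common fourth-order lag moment, m4(tau) = E[x(n) x(n+tau)^3]."""
    n = x.size
    return np.array([np.mean(x[: n - tau] * x[tau:] ** 3)
                     for tau in range(max_lag)])

rng = np.random.default_rng(1)
n_rx, n_samp, max_lag = 4, 4096, 32
rx = rng.normal(size=(n_rx, n_samp))          # stand-in received signals
folms = np.stack([fourth_order_lag_moments(r, max_lag) for r in rx])
# folms (n_rx x max_lag) plays the role of the 2-D FOLMS image.
```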
Neuromorphic computing simulates the operation of biological brain function for information processing and can potentially overcome the bottleneck of the von Neumann architecture. Inspired by the real characteristics of physical memristive devices, we propose a threshold-type nonlinear voltage-controlled memristor mathematical model and use it to design a novel memristor-based crossbar array. The presented crossbar array can represent synaptic weights in the real number field rather than only the positive number field. Theoretical analysis and simulation results of a 2×2 image inversion operation validate the feasibility of the proposed crossbar array and the necessary training and inference functions. Finally, the presented crossbar array is used to construct a neural network, which is then applied to handwritten digit recognition. The Mixed National Institute of Standards and Technology (MNIST) database is adopted to train this neural network, and it achieves satisfactory accuracy.
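The real-valued weight idea can be illustrated with the standard differential-pair trick: each synapse is realized by two memristor conductances, so the effective weight g_plus − g_minus can be negative even though every physical conductance is positive. The sketch below is a minimal numerical illustration with made-up conductances, not the paper's specific crossbar design.

```python
import numpy as np

# Differential memristor pair per synapse: w = g_plus - g_minus spans the
# real field although each conductance is physically positive.
g_plus = np.array([[0.9, 0.1],
                   [0.2, 0.8]])       # conductances, all positive (assumed)
g_minus = np.array([[0.1, 0.9],
                    [0.8, 0.2]])
w = g_plus - g_minus                  # effective real-valued weights
v = np.array([1.0, -1.0])             # input voltages on the rows
i_out = w @ v                         # column currents (Kirchhoff summation)
```

The matrix-vector product is computed "for free" by current summation along the crossbar columns, which is what makes such arrays attractive for inference.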
To simulate metamaterial rotationally symmetric open region problems, an unconditionally stable perfectly matched layer (PML) implementation is proposed for the body of revolution (BOR) finite-difference time-domain (FDTD) lattice. More precisely, the proposed algorithm is implemented by the Crank-Nicolson (CN) Douglas-Gunn (DG) procedure for BOR metamaterial simulation. The constitutive relationship of the metamaterial can be expressed by the Drude model and calculated by the piecewise linear recursive convolution (PLRC) approach. The effectiveness, including absorption, efficiency, and accuracy, is demonstrated through a numerical example. It can be concluded that the proposed implementation takes advantage of the CNDG-PML procedure, the PLRC approach, and the BOR-FDTD algorithm in terms of considerable accuracy, enhanced absorption, and remarkable efficiency. Meanwhile, it is demonstrated that the proposed scheme maintains its unconditional stability when the time step exceeds the Courant-Friedrichs-Lewy (CFL) condition.
Channel estimation is a key issue in millimeter-wave (mmWave) massive multi-input multi-output (MIMO) communication systems, and it becomes more challenging with a large number of antennas. In this paper, we propose a deep learning (DL)-based fast channel estimation method for mmWave massive MIMO systems. The proposed method can directly and effectively estimate channel state information (CSI) from the received data without performing a pilot-based estimate in advance, which simplifies the estimation process. Specifically, we develop a convolutional neural network (CNN)-based channel estimation network for the case of dimensional mismatch between input and output data, subsequently denoted as the channel (H) neural network (HNN). It can quickly estimate the channel by learning the inherent characteristics of the received data and the relationship between the received data and the channel, even though the dimension of the received data is much smaller than that of the channel matrix. Simulation results show that the proposed HNN achieves better channel estimation accuracy than existing schemes.
This paper develops a deep estimator framework of deep convolution networks (DCNs) for super-resolution direction of arrival (DOA) estimation. In addition to the scenario of correlated signals, the quantization errors of the DCN are the major challenge. In our framework, one DCN is used for spectrum estimation with quantization errors, and the remaining two DCNs are used to estimate the quantization errors themselves. We propose training our estimator using the spatially sampled covariance matrix directly as the input, without any feature extraction operation. Then, we reconstruct the original spatial spectrum from the spectrum estimate and the quantization error estimates. The feasibility of the proposed deep estimator is also analyzed in detail in this paper. Once the deep estimator is appropriately trained, it can recover the correlated signals’ spatial spectrum quickly and accurately. Simulation results show that our estimator performs well in both resolution and estimation error compared with the state-of-the-art algorithms.
A two-dimensional directional modulation (DM) technology with a dual-mode orbital angular momentum (OAM) beam is proposed for physical-layer security of relay unmanned aerial vehicle (UAV) tracking transmission. The elevation and azimuth of the vortex beam are modulated into the constellation, which can form the digital waveform with the encoding modulation. Since the signal is direction-dependent, the modulated waveform is purposely distorted in other directions to offer a security technology. Two concentric uniform circular arrays (UCAs) with different radii are excited to generate dual orthogonal vortex beams for the composite signal, which can increase the demodulation difficulty. Due to the phase propagation characteristics of the vortex beam, the constellation at the desired azimuth angle changes continuously within a wavelength. A desired single antenna receiver can use propagation phase compensation and an opposite helical phase factor to demodulate the signal in the desired direction. Simulations show that the proposed OAM-DM scheme offers a security approach with direction-sensitive transmission.
In this work, multi-fidelity (MF) simulation driven Bayesian optimization (BO) and its advanced form are proposed to optimize antennas. Firstly, the multiple objective targets and the constraints are fused into one comprehensive objective function, which facilitates an end-to-end way of optimization. Then, to increase the efficiency of surrogate construction, we propose MF simulation-based BO (MFBO), in which a surrogate model using MF simulations is introduced based on the theory of the multi-output Gaussian process. To further exploit the low-fidelity (LF) simulation data, a modified MFBO (M-MFBO) is subsequently proposed. By picking out the most promising points from the LF simulation data and re-simulating them in a high-fidelity (HF) way, the M-MFBO has a possibility to obtain a better result with negligible overhead compared to the MFBO. Finally, two antennas are used to verify the proposed algorithms. The results show that HF simulation-based BO (HFBO) outperforms the traditional algorithms, that the MFBO performs more effectively than the HFBO, and that a superior optimization result can sometimes be achieved by reusing the LF simulation data.
Micro-Doppler feature extraction of unmanned aerial vehicles (UAVs) is important for their identification and classification. Noise and the motion state of the UAV are the main factors that may affect feature extraction and the estimation precision of the micro-motion parameters. The spectrum of the UAV echoes is reconstructed to strengthen the micro-motion features and reduce the influence of noise under low signal to noise ratio (SNR) conditions. Then, considering the variation of the rotor rate of a UAV in a complex motion state, the cepstrum method is improved to extract the rotation rate of the UAV, and the blade length can then be accurately estimated. The experimental results for simulated and measured data show that the reconstruction of the spectrum of the UAV echoes is helpful and that the relative root mean square errors of the rotating speed and blade length estimated by the proposed method are improved. However, the computational complexity is higher, imposing a heavier computational burden.
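The cepstrum idea behind rotation-rate extraction can be sketched as follows: periodic blade flashes at f_rot Hz place spectral lines every f_rot Hz, so the log-spectrum is periodic in frequency and its inverse transform (the cepstrum) peaks at quefrency 1/f_rot. The flash waveform, the carrier, and the 25-100 Hz rate prior below are invented for illustration; the paper's improved cepstrum method is more elaborate.

```python
import numpy as np

# Toy blade-flash echo: an amplitude-gated tone whose gating rate is the
# rotation rate; the cepstral peak recovers that rate.
fs, f_rot = 2000.0, 40.0
t = np.arange(0, 2.0, 1 / fs)
flash = (np.cos(2 * np.pi * f_rot * t) > 0.9).astype(float)  # blade flashes
x = np.cos(2 * np.pi * 100.0 * t) * flash                    # modulated echo

spec = np.abs(np.fft.rfft(x * np.hanning(x.size)))
ceps = np.abs(np.fft.irfft(np.log(spec + 1e-12)))            # real cepstrum
lo, hi = int(0.010 * fs), int(0.040 * fs)    # search 10-40 ms (25-100 Hz)
peak = lo + int(ceps[lo:hi].argmax())
f_rot_hat = fs / peak                        # rotation-rate estimate, Hz
```

Restricting the quefrency search window with a prior on plausible rotor rates, as done here, is one simple way to avoid picking a rahmonic instead of the fundamental.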
In this paper, we introduce an incident angle based fusion method for radar and infrared sensors to improve the recognition rate of complex targets under half space scenarios, e.g., vehicles on the ground. For the radar sensor, a convolutional operation is introduced into the autoencoder, and a “winner-take-all (WTA)” convolutional autoencoder (CAE) is used to improve the recognition rate of the radar high resolution range profile (HRRP). Moreover, different from the free space case, the HRRP in half space is more complex. In order to get closer to the real situation, the half space HRRP is simulated as the dataset. The recognition rate grows by more than 7% compared with the traditional CAE or the denoised sparse autoencoder (DSAE). For the infrared sensor, a convolutional neural network (CNN) is used for infrared image recognition. Finally, we combine the two results with the Dempster-Shafer (D-S) evidence theory, and the discounting operation is introduced in the fusion to improve the recognition rate. The recognition rate after fusion grows by more than 7% compared with a single sensor. After the discounting operation, the accuracy rate is improved by a further 1.5%, which validates the effectiveness of the proposed method.
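The fusion step above can be sketched in a few lines: each sensor reports a basic probability assignment over the target classes plus the ignorance set Theta, each source is discounted by a reliability factor, and Dempster's rule combines them. The class names, masses, and reliability factors below are illustrative, not the paper's values.

```python
# Sketch of D-S fusion with discounting over singleton hypotheses plus
# the whole frame "Theta"; all numbers are made up for illustration.
def discount(m, alpha):
    """Shafer discounting: scale masses by alpha, move the rest to Theta."""
    out = {k: alpha * v for k, v in m.items()}
    out["Theta"] = out.get("Theta", 0.0) + (1.0 - alpha)
    return out

def combine(m1, m2):
    """Dempster's rule for singleton hypotheses plus the frame 'Theta'."""
    fused, conflict = {}, 0.0
    for a, va in m1.items():
        for b, vb in m2.items():
            if a == b:
                inter = a
            elif a == "Theta":
                inter = b
            elif b == "Theta":
                inter = a
            else:
                conflict += va * vb          # empty intersection
                continue
            fused[inter] = fused.get(inter, 0.0) + va * vb
    return {k: v / (1.0 - conflict) for k, v in fused.items()}

m_radar = discount({"car": 0.7, "truck": 0.2, "Theta": 0.1}, alpha=0.9)
m_ir = discount({"car": 0.6, "truck": 0.3, "Theta": 0.1}, alpha=0.8)
m_fused = combine(m_radar, m_ir)
```

Discounting shifts mass from a less reliable source toward ignorance before combination, which is exactly what lets the fusion down-weight a sensor without discarding it.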
To address the problem of light reflected from the surface of a transparent body, the polarization characteristics of the reflection region are analyzed, and a polarization characterization model combining the reflection and transmission effects is established. On the basis of the polarization characteristic analysis, the minimum value of the normalized cross-correlation (NCC) coefficient between the transmission and reflection images is solved through the gradient descent method, and their polarization degrees under the minimum correlation are acquired. According to the distribution relations of the transmitted and reflected light in the perpendicular and parallel directions, reflection separation is realized via a polarized orthogonality difference algorithm based on the degree of reflection polarization and the degree of transmission polarization.
Partial label learning aims to learn a multi-class classifier, where each training example corresponds to a set of candidate labels among which only one is correct. Most studies in the label space have focused only on the difference between candidate labels and non-candidate labels. So far, however, there has been little discussion of label correlation in partial label learning. This paper begins with research on label correlation, followed by the establishment of a unified framework that integrates the label correlation, the adaptive graph, and the semantic difference maximization criterion. This work generates fresh insight into the acquisition of learning information from the label space. Specifically, the label correlation is calculated from the candidate label set and is utilized to obtain the similarity of each pair of instances in the label space. After that, the labeling confidence of each instance is updated under the smoothness assumption that two instances should have similar outputs in the label space if they are close in the feature space. Finally, an effective optimization program is utilized to solve the unified framework. Extensive experiments on artificial and real-world data sets indicate the superiority of our proposed method over state-of-the-art partial label learning methods.
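The smoothness-driven confidence update can be sketched with a tiny example: feature-space similarity propagates labeling confidence between neighbors, while a candidate-set mask keeps all mass on each instance's candidate labels. The data, kernel, and mixing weight below are invented, and this is only a stand-in for the paper's unified optimization framework.

```python
import numpy as np

# Toy smoothness-based labeling-confidence update for partial labels.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])   # 3 instances
C = np.array([[1, 1, 0],                             # candidate-label masks
              [1, 0, 0],                             # over 3 labels
              [0, 1, 1]], dtype=float)

d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
S = np.exp(-d2)                        # feature-space similarity
np.fill_diagonal(S, 0.0)
S /= S.sum(1, keepdims=True)           # row-normalized adjacency

F = C / C.sum(1, keepdims=True)        # initial labeling confidence
for _ in range(20):
    F = 0.5 * (S @ F) + 0.5 * F        # smoothness-driven propagation
    F *= C                             # zero out non-candidate labels
    F /= F.sum(1, keepdims=True)       # back onto the probability simplex
```

Instance 0 has candidates {0, 1} but sits next to instance 1, whose only candidate is label 0, so its confidence is pulled toward label 0 by the smoothness assumption.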
This paper considers multi-frequency passive radar and develops a multi-frequency joint direction of arrival (DOA) estimation algorithm to improve estimation accuracy and resolution. The developed algorithm exploits the sparsity of targets in the spatial domain. Specifically, we first extract the required frequency channel data and acquire the snapshot data through a series of preprocessing steps such as clutter suppression, coherent integration, beamforming, and constant false alarm rate (CFAR) detection. Then, based on the framework of sparse Bayesian learning, the target’s DOA is estimated by jointly extracting the multi-frequency data via evidence maximization. Simulation results show that the developed algorithm has better estimation accuracy and resolution than other existing multi-frequency DOA estimation algorithms, especially under scenarios of low signal-to-noise ratio (SNR) and small numbers of snapshots. Furthermore, the effectiveness is verified by the field experimental data of a multi-frequency FM-based passive radar.
Various types of interference signals limit the practical application of transform domain communication systems (TDCSs) in severe electromagnetic environments. To effectively address the problem of obtaining an optimal transform domain based on sparse representation, an orthogonal basis learning method of transformation analysis (OBL-TA) is proposed. Then, sparse availability is utilized to obtain the optimal transformation analysis by iterative methods, which yields the sparse representation for the transform domain (SRTD) in unrestricted form. In addition, the iterative version of the SRTD (I-SRTD) in unrestricted form is obtained by decomposing the SRTD problem into three sub-problems, each of which is solved iteratively by learning the best orthogonal basis. Furthermore, orthogonal basis learning via cost function minimization is conducted by stochastic descent, which is guaranteed to converge to at least a local minimum. Finally, the optimal transformation analysis is determined by the effectiveness of different transform domains according to the accuracy of the sparse representation, and the optimal transformation analysis separately (OPTAS) method is applied to synthesized signal forms with conic alternatives, dualization, and smoothing. Simulation results demonstrate that the proposed methods achieve optimal recovery and separation more rapidly and accurately than conventional methods.
High-precision filtering estimation is one of the key techniques for the strapdown inertial navigation system/global navigation satellite system (SINS/GNSS) integrated navigation system, and it plays an important role in the performance of the navigation system. Traditional filter estimation methods usually assume that the measurement noise conforms to a Gaussian distribution, without considering the pollution introduced by the GNSS signal, which is susceptible to external interference. To address this problem, a high-precision filter estimation method using Gaussian process regression (GPR) is proposed to enhance the prediction and estimation capability of the unscented quaternion estimator (USQUE) and thereby improve the navigation accuracy. Taking advantage of the GPR machine learning function, the estimation performance of the sliding window used for model training is measured. The method estimates the output of the observation information source through the measurement window and realizes a robust measurement update of the filter. The combination of GPR and the USQUE algorithm establishes a robust mechanism framework, which enhances the robustness and stability of traditional methods. The results of a trajectory simulation experiment and SINS/GNSS car-mounted tests indicate that the strategy has strong robustness and high estimation accuracy, demonstrating the effectiveness of the proposed method.
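The role GPR plays here, predicting the next measurement from a sliding window of past ones so that an outlying GNSS observation can be detected and down-weighted, can be sketched with a minimal RBF-kernel GP posterior mean. The window data, kernel, and hyperparameters below are toy values, not the paper's SINS/GNSS configuration.

```python
import numpy as np

# Minimal GPR sketch: training pairs inside a sliding window predict the
# measurement at the next epoch via the GP posterior mean.
def rbf(a, b, ell=1.0, sig=1.0):
    """Squared-exponential kernel matrix between 1-D input vectors."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return sig**2 * np.exp(-0.5 * d2 / ell**2)

t_train = np.linspace(0.0, 4.0, 20)      # sliding-window timestamps
y_train = np.sin(t_train)                # windowed measurements (toy)
noise = 1e-4                             # assumed measurement noise variance

K = rbf(t_train, t_train) + noise * np.eye(t_train.size)
alpha = np.linalg.solve(K, y_train)      # K^{-1} y

t_new = np.array([4.2])                  # next measurement epoch
y_pred = rbf(t_new, t_train) @ alpha     # GP posterior mean prediction
```

A large gap between y_pred and the incoming measurement is the kind of signal a robust update can use to suppress polluted GNSS observations.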
This paper introduces the time-frequency analyzed long short-term memory (TF-LSTM) neural network method for jamming signal recognition at the Global Navigation Satellite System (GNSS) receiver. The method introduces the long short-term memory (LSTM) neural network into the recognition algorithm and combines it with time-frequency (TF) analysis for signal preprocessing. Five kinds of navigation jamming signals, including white Gaussian noise (WGN), pulse jamming, sweep jamming, audio jamming, and spread spectrum jamming, are used as input for training and recognition. Since the signal parameters and quantity are unknown in the actual scenario, this work builds a data set containing jamming of multiple kinds and parameter settings to train the TF-LSTM. The performance of this method is evaluated by simulations and experiments. The method has higher recognition accuracy and better robustness than the existing methods, such as LSTM and the convolutional neural network (CNN).
In this paper, an importance sampling maximum likelihood (ISML) estimator for the direction-of-arrival (DOA) of incoherently distributed (ID) sources is proposed. Starting from the maximum likelihood estimation description of the uniform linear array (ULA), a decoupled concentrated likelihood function (CLF) is presented. A new objective function based on the CLF, which can obtain a closed-form solution of the global maximum, is constructed according to the Pincus theorem. To obtain the optimal value of the objective function, which is a complex high-dimensional integral, we propose an importance sampling approach based on Monte Carlo random calculation. Next, an importance function is derived, which can simplify the problem from generating a random vector from a high-dimensional probability density function (PDF) to generating a random variable from a one-dimensional PDF. Compared with the existing maximum likelihood (ML) algorithms for DOA estimation of ID sources, the proposed algorithm does not require initial estimates, and its performance is closer to the Cramér-Rao lower bound (CRLB). The proposed algorithm performs better than the existing methods when the interval between the sources to be estimated is small and in low signal to noise ratio (SNR) scenarios.
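The Pincus-theorem idea behind the closed-form global maximum can be sketched in one dimension: the global maximizer of L(theta) is the limit, as rho grows, of a mean weighted by exp(rho * L(theta)), which Monte Carlo samples from a simple importance density can approximate without any grid search or initial estimate. The objective below is a toy stand-in for the concentrated likelihood, and rho and the sample count are arbitrary.

```python
import numpy as np

# Pincus-style global maximization by importance-weighted Monte Carlo:
# theta_hat = sum_i w_i theta_i with w_i proportional to exp(rho * L(theta_i)).
rng = np.random.default_rng(3)
L = lambda th: np.sinc(th - 0.7) ** 2          # toy objective, max at 0.7

theta = rng.uniform(-3.0, 3.0, 20000)          # uniform importance density
w = np.exp(40.0 * L(theta))                    # rho = 40 sharpens the peak
w /= w.sum()                                   # normalized weights
theta_hat = np.sum(w * theta)                  # closed-form estimate
```

Larger rho concentrates the weights around the global peak; the same mechanism, applied with a tailored one-dimensional importance function, is what removes the search from the ISML estimator.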
To improve the error correction performance, an innovative encoding structure with tail-biting for spinal codes is designed. Furthermore, an adaptive forward stack decoding (A-FSD) algorithm with lower complexity for spinal codes is proposed. In the A-FSD algorithm, a flexible threshold parameter is set according to the variable channel state to narrow the scale of the nodes accessed. On this basis, a new decoding method, A-FSD with early termination (AFSD-ET), is further proposed. The AFSD-ET decoder not only dynamically modifies the number of stored nodes, but also adopts an early termination criterion to curtail complexity. The complexity and related parameters are verified through a series of simulations. The simulation results show that the proposed spinal codes with tail-biting and the AFSD-ET decoding algorithm reduce the complexity and improve the decoding rate without sacrificing correct decoding performance.
With the development of technology, the performance of unmanned aerial vehicles (UAVs) has been greatly improved, and various highly maneuverable UAVs have been developed, which places higher requirements on target tracking technology. Strong maneuvering refers to relatively instantaneous and dramatic changes in target acceleration or movement patterns, as well as continuous changes in speed, angle, and acceleration. However, the traditional UAV tracking algorithm model has poor adaptability and a large computational load. This paper applies support vector regression (SVR) to the interacting multiple model (IMM) algorithm. The simulation results show that the improved algorithm has higher tracking accuracy for highly maneuverable targets than the original algorithm and can adjust its parameters adaptively, making it more adaptable.
This paper presents a co-time co-frequency full-duplex (CCFD) massive multiple-input multiple-output (MIMO) system to meet the high spectrum efficiency requirements of beyond fifth-generation (5G) and forthcoming sixth-generation (6G) networks. To achieve an equilibrium of energy consumption, system resource utilization, and overall transmission capacity, an energy-efficient resource management strategy concerning power allocation and antenna selection is designed. A continuous quantum-inspired termite colony optimization (CQTCO) algorithm is proposed as a solution to the resource management problem, considering communication reliability while promoting energy conservation for the CCFD massive MIMO system. The effectiveness of CQTCO compared with other algorithms is evaluated through simulations. The results reveal that the proposed resource management scheme under CQTCO achieves superior performance in different communication scenarios and can be considered an eco-friendly solution for promoting reliable and efficient communication in future wireless networks.
An approach for joint direction of arrival (DOA) angle and frequency estimation for a linear array is investigated in this paper. Specifically, we make full use of the autocorrelation and cross-correlation information to propose an extended DOA-matrix (EDOAM) method. Subsequently, we obtain auto-paired angle and frequency estimates from the eigenvalues and the corresponding eigenvectors of the novel DOA matrix. Furthermore, the proposed method surpasses the DOA-matrix method, which partly ignores the autocorrelation and cross-correlation information. Finally, the proposed method works well for both uniform and non-uniform linear arrays. The simulation results indicate the superiority of the proposed approach.
In some tracking applications, due to the sensor characteristics, only range measurements are available. In this case, due to the lack of full position measurements, the observability of the Cartesian states (e.g., position and velocity) is limited to particular cases. For general cases, the range measurements can be utilized by developing a state estimation algorithm in the range-Doppler (R-D) plane to obtain accurate range and Doppler estimates. In this paper, a state estimation method based on the proper dynamic model in the R-D plane is proposed. The unscented Kalman filter is employed to handle the strong nonlinearity in the dynamic model. Two filter initialization methods are derived to extract the initial state estimate and the initial covariance in the R-D plane from the first several range measurements. One is derived from the well-known two-point differencing method. The other incorporates the correct dynamic model information and uses the unscented transformation to obtain the initial state estimate and covariance, resulting in a model-based method that capitalizes on the model information to yield better performance. Monte Carlo simulation results are provided to illustrate the effectiveness and superior performance of the proposed state estimation and filter initialization methods.
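The two-point differencing initialization mentioned above has a compact standard form: the first two range measurements give the initial range and range rate, and the measurement noise variance propagates into the initial covariance. The sampling interval, noise level, and measurements below are illustrative; the paper's model-based variant refines this with the unscented transformation.

```python
import numpy as np

# Two-point differencing initialization in the R-D plane (range part):
# state is [range, range rate]; covariance follows from var(r) = sigma_r^2.
T = 1.0                      # sampling interval, s (assumed)
sigma_r = 5.0                # range measurement noise std, m (assumed)
r1, r2 = 1000.0, 980.0       # first two range measurements, m (toy values)

x0 = np.array([r2, (r2 - r1) / T])            # initial [range, range rate]
P0 = sigma_r**2 * np.array([[1.0, 1.0 / T],   # cov(r2, (r2 - r1)/T)
                            [1.0 / T, 2.0 / T**2]])
```

The off-diagonal terms matter: the range-rate estimate reuses r2, so the initial range and range-rate errors are correlated, and ignoring that correlation degrades early filter performance.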
The scattering centers (SCs) of low-detectable targets (LDTs) have a low scattering intensity. It is difficult to build the SC model of an LDT using the existing methods because these methods mainly concern dominant SCs with strong scattering contributions. This paper presents an SC modeling approach to acquire the weak SCs of LDTs. We employ the induced currents on the LDT to search for SCs, and use the joint time-frequency transform together with the Hough transform to separate the scattering contributions of different SCs. Particle swarm optimization (PSO) is applied to improve the estimation results of the SCs. The accuracy of the SC model built by this approach is verified by a full-wave numerical method. The validation results show that the SC model of the LDT can precisely simulate the signatures of high-resolution images, such as high-resolution range profiles and inverse synthetic aperture radar (ISAR) images.
Radio frequency fingerprinting (RFF) is a technology that identifies the specific emitter of a received electromagnetic signal by externally measuring the minuscule hardware-level, device-specific imperfections. The RFF-related information is mainly in the form of unintentional modulation (UIM), which is subtle enough to be effectively imperceptible and is submerged in the intentional modulation (IM). It is necessary to minimize the influence of the IM and expand the slight differences between emitters for successful RFF. This paper proposes a UIM microstructure enlargement (UMME) method based on feature-level adaptive signal decomposition (ASD), accompanied by autocorrelation and cross-correlation analysis. The common IM part is evaluated by analyzing a newly-defined benchmark feature. Three different indexes are used to quantify the similarity, distance, and dependency of the RFF features from different devices. Experiments are conducted on real-world signals transmitted from 20 radars of the same type operating in the same working mode. The visual results qualitatively show the magnification of the feature differences, and the different indicators quantitatively describe the changes in the features. Compared with the original RFF feature, recognition results based on the Gaussian mixture model (GMM) classifier further validate the effectiveness of the proposed algorithm.
In this paper, we first propose a memristive chaotic system and implement it by circuit simulation. The chaotic dynamics and various attractors are analysed by using the phase portrait, bifurcation diagram, and Lyapunov exponents. In particular, the system has robust chaos in a wide parameter range and initial value space, which is favourable to the secure communication application. Consequently, we further explore its application in image encryption and present a new scheme. Before image processing, the external key is protected by the Grain-128a algorithm and the initial values of the memristive system are updated with the plain image. We not only perform random pixel extraction and masking with the chaotic cipher, but also use the chaotic sequences as control parameters of Brownian motion to obtain the permutation matrix. In addition, multiplication on the finite field GF(2^8) is added to further strengthen the cryptography. Finally, the simulation results verify that the proposed image encryption scheme has better performance and higher security, and can effectively resist various attacks.
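The GF(2^8) multiplication step can be sketched with the standard shift-and-reduce byte multiply. Whether the scheme uses the AES reduction polynomial x^8 + x^4 + x^3 + x + 1 (0x11B) is an assumption made here for illustration; any irreducible degree-8 polynomial works the same way.

```python
# Carry-less multiplication of two bytes in GF(2^8), reduced modulo the
# AES polynomial 0x11B (an assumed choice of reduction polynomial).
def gf256_mul(a: int, b: int) -> int:
    """Shift-and-reduce ("Russian peasant") multiply in GF(2^8)."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a           # add (XOR) current multiple of a
        b >>= 1
        carry = a & 0x80
        a = (a << 1) & 0xFF  # multiply a by x
        if carry:
            a ^= 0x1B        # reduce: x^8 = x^4 + x^3 + x + 1
    return p
```

Because XOR replaces addition, this multiply diffuses pixel bytes nonlinearly without ever leaving the byte range, which is why finite-field mixing is a common strengthening step in image ciphers.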
In this paper, we propose a joint waveform selection and power allocation (JWSPA) strategy based on chance-constraint programming (CCP) for a manned/unmanned aerial vehicle hybrid swarm (M/UAVHS) tracking a single target. Accordingly, the low probability of intercept (LPI) performance of the system can be improved by collaboratively optimizing transmit power and waveform. For target radar cross section (RCS) prediction, we design a random RCS prediction model based on electromagnetic simulation (ES) of the target. For waveform selection, we build a waveform library to adaptively manage the frequency modulation slope and pulse width of the radar waveform. For power allocation, the CCP is employed to balance tracking accuracy and power resources. The Bayesian Cramér-Rao lower bound (BCRLB) is adopted as a criterion to measure the target tracking accuracy. Hybrid intelligent algorithms, in which stochastic simulation is integrated into the genetic algorithm (GA), are used to solve the stochastic optimization problem. Simulation results demonstrate that the proposed JWSPA strategy can save more transmit power than the traditional fixed waveform scheme under the same target tracking accuracy.
The widespread 5G base stations can be potential jammers for vulnerable BeiDou B1I receivers because of the low power of the BeiDou B1I signal. Therefore, a novel analytical model is derived for the 5G signal to evaluate its impact on acquisition performance under three decision methods. The good agreement between the Monte Carlo method (MCM), implemented through a software defined receiver (SDR), and the derived expressions validates the effectiveness of the proposed model. It is found that the receivers exhibit varied responses to different 5G waveforms and decision strategies. The receiver also shows the least endurance for some kinds of 5G waveforms; however, this kind of adverse effect can be cancelled by a reduced interference signal ratio (ISR), an increased integration time, or a larger number of accumulations.