Deep neural networks (DNNs) have achieved great success in many data processing applications. However, their high computational complexity and storage cost make deep learning difficult to deploy on resource-constrained devices, and the associated power consumption is not environmentally friendly. In this paper, we focus on low-rank optimization for efficient deep learning techniques. In the spatial domain, DNNs are compressed by low-rank approximation of the network parameters, which directly reduces the storage requirement through a smaller number of network parameters. In the time domain, the network parameters can be trained in a few subspaces, which enables efficient training with fast convergence. Model compression in the spatial domain is summarized into three categories: pre-train, pre-set, and compression-aware methods. With a series of complementary techniques such as sparse pruning, quantization, and entropy coding, these methods can be combined in an integrated framework with lower computational complexity and storage. In addition to summarizing recent technical advances, we report two findings that motivate future work. One is that the effective rank, derived from the Shannon entropy of the normalized singular values, outperforms conventional sparsity measures such as the $ \ell_1 $ norm for network compression. The other is a spatial-temporal balance for tensorized neural networks: to accelerate the training of tensorized neural networks, it is crucial to leverage redundancy for both model compression and subspace training.
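As an illustration of the effective-rank measure mentioned above, the sketch below computes it as the exponential of the Shannon entropy of the normalized singular values of a weight matrix and contrasts it with the $ \ell_1 $ norm of the singular values. The matrix sizes and variable names are illustrative only, not taken from the paper.

```python
import numpy as np

def effective_rank(W: np.ndarray, eps: float = 1e-12) -> float:
    """Effective rank: exp of the Shannon entropy of the normalized singular values."""
    s = np.linalg.svd(W, compute_uv=False)
    p = s / (s.sum() + eps)               # normalize singular values to a distribution
    entropy = -np.sum(p * np.log(p + eps))
    return float(np.exp(entropy))

# Example: a 256x256 weight matrix that is approximately rank-8 plus small noise
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 8)) @ rng.standard_normal((8, 256)) \
    + 0.01 * rng.standard_normal((256, 256))

print("effective rank:", effective_rank(W))          # close to 8 for this matrix
print("l1 norm of singular values:",
      np.linalg.svd(W, compute_uv=False).sum())      # scale-dependent, less informative
```

Unlike the $ \ell_1 $ norm of the singular values, the effective rank is invariant to the overall scale of the weights, which is one reason it can serve as a more reliable compression criterion.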
With the extensive application of large-scale antenna arrays, the growing number of array elements increases the dimension of the received signals, making it difficult to meet the real-time requirements of direction of arrival (DOA) estimation due to the computational complexity of the algorithms. Traditional subspace algorithms require estimation of the covariance matrix, which is computationally expensive and prone to producing spurious peaks. To reduce the computational complexity of DOA estimation and improve its accuracy for large arrays, this paper proposes a DOA estimation method based on the Krylov subspace and the weighted $ {l}_{1} $-norm. The method uses the multistage Wiener filter (MSWF) iteration to obtain a basis of the Krylov subspace as an estimate of the signal subspace, then uses a measurement matrix to reduce the dimensionality of the signal-subspace observation, constructs a weighting matrix, and, combined with sparse reconstruction, establishes a convex optimization problem based on the residual sum of squares and the weighted $ {l}_{1} $-norm to solve for the target DOAs. Simulation results show that the proposed method achieves high resolution with large arrays, effectively suppresses spurious peaks, reduces computational complexity, and is robust in low signal-to-noise ratio (SNR) environments.
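The final sparse-reconstruction step can be illustrated with a minimal sketch of the convex program combining a residual sum of squares with a weighted $ {l}_{1} $-norm, solved here with the `cvxpy` package. The MSWF/Krylov subspace estimation and dimensionality reduction are not reproduced; the dictionary `A`, observation `y`, and uniform weights `w` are placeholders (in the paper the weights come from the constructed weighting matrix).

```python
import numpy as np
import cvxpy as cp

def weighted_l1_doa(A: np.ndarray, y: np.ndarray, w: np.ndarray, lam: float = 0.1) -> np.ndarray:
    """Sparse spatial spectrum from  min ||y - A x||_2^2 + lam * ||diag(w) x||_1."""
    x = cp.Variable(A.shape[1], complex=True)
    objective = cp.Minimize(cp.sum_squares(y - A @ x) + lam * cp.norm1(cp.multiply(w, x)))
    cp.Problem(objective).solve()
    return np.abs(x.value)            # peaks indicate estimated DOAs on the grid

# Toy example: 16-element half-wavelength ULA, two sources at -10 and 20 degrees
M, grid = 16, np.arange(-60, 61)
steer = lambda deg: np.exp(1j * np.pi * np.arange(M)[:, None] * np.sin(np.deg2rad(deg)))
A = steer(grid)
y = (steer(np.array([-10, 20])) @ np.array([1.0, 0.8])
     + 0.05 * (np.random.randn(M) + 1j * np.random.randn(M)))
spectrum = weighted_l1_doa(A, y, w=np.ones(len(grid)))
print("estimated DOAs (deg):", grid[np.argsort(spectrum)[-2:]])
```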
The warhead of a ballistic missile may precess due to lateral moments during release. The resulting micro-Doppler effect is determined by parameters such as the target’s motion state and size. A three-dimensional reconstruction method for the precessing warhead based on micro-Doppler analysis and the inverse Radon transform (IRT) is proposed in this paper. The precession parameters are extracted by micro-Doppler analysis from three radars, and the IRT is used to estimate the size of the target. The scatterers of the target can then be reconstructed from these parameters. Simulation results illustrate the effectiveness of the proposed method.
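A minimal sketch of the IRT step is shown below: the micro-Doppler ridge of a single rotating scatterer traces a sinusoid over aspect angle, which is exactly one curve of a sinogram, so `skimage.transform.iradon` recovers the scatterer position (and hence its radius, i.e., size information). The synthetic sinogram, the scatterer radius, and the assumption of a recent scikit-image version (with the `filter_name` argument) are illustrative; the precession-parameter extraction and three-radar fusion of the paper are not reproduced.

```python
import numpy as np
from skimage.transform import iradon

# Hypothetical set-up: one scatterer at normalized radius r rotating at a known rate;
# its micro-Doppler ridge over one rotation period forms a sinusoidal sinogram curve.
n_bins, n_angles, r = 128, 180, 0.6
theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
sinogram = np.zeros((n_bins, n_angles))
proj = r * np.cos(np.deg2rad(theta) + 0.3)      # projected position at each aspect angle
rows = np.round((proj + 1.0) / 2.0 * (n_bins - 1)).astype(int)
sinogram[rows, np.arange(n_angles)] = 1.0       # binarized micro-Doppler ridge

image = iradon(sinogram, theta=theta, filter_name="ramp")
peak = np.unravel_index(np.argmax(image), image.shape)
print("reconstructed scatterer pixel:", peak)   # lies at distance ~r*n_bins/2 from the centre
```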
Linear minimum mean square error (MMSE) detection has been shown to achieve near-optimal performance for massive multiple-input multiple-output (MIMO) systems but inevitably involves complicated matrix inversion, which entails high complexity. To avoid exact matrix inversion, a considerable number of detection methods based on implicit and explicit approximate matrix inversion have been proposed. By combining the advantages of both explicit and implicit matrix inversion, this paper introduces a new low-complexity signal detection algorithm. First, the relationship between the implicit and explicit techniques is analyzed. Then, an enhanced Newton iteration method is introduced to realize approximate MMSE detection for massive MIMO uplink systems. The proposed improved Newton iteration significantly reduces the complexity of the conventional Newton iteration; however, its complexity remains high for later iterations, so it is applied only to the first two iterations. For subsequent iterations, we propose a novel trace iterative method (TIM) based low-complexity algorithm, whose complexity is significantly lower than that of higher-order Newton iterations. Convergence guarantees of the proposed detector are also provided. Numerical simulations verify that the proposed detector exhibits significant performance enhancement over recently reported iterative detectors and achieves close-to-MMSE performance while retaining the low-complexity advantage for systems with hundreds of antennas.
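For context, the sketch below shows the conventional Newton (Newton-Schulz) iteration for approximating the inverse of the MMSE filtering matrix, which is the baseline the paper improves upon; the enhanced Newton step and the trace iterative method (TIM) themselves are not reproduced. The diagonal initialization and the toy 128x16 system are common illustrative choices, not taken from the paper.

```python
import numpy as np

def mmse_detect_newton(H: np.ndarray, y: np.ndarray, sigma2: float, iters: int = 3) -> np.ndarray:
    """Approximate MMSE detection x = W^{-1} H^H y with W = H^H H + sigma2 I,
    where W^{-1} is approximated by the classical Newton-Schulz iteration."""
    K = H.shape[1]
    W = H.conj().T @ H + sigma2 * np.eye(K)
    y_mf = H.conj().T @ y                           # matched-filter output
    X = np.diag(1.0 / np.real(np.diag(W)))          # diagonal initialization (standard choice)
    for _ in range(iters):
        X = X @ (2.0 * np.eye(K) - W @ X)           # X_{k+1} = X_k (2I - W X_k)
    return X @ y_mf

# Toy 128x16 uplink with QPSK-like symbols: the detection error shrinks as iters grows
rng = np.random.default_rng(1)
H = (rng.standard_normal((128, 16)) + 1j * rng.standard_normal((128, 16))) / np.sqrt(2)
x = np.sign(rng.standard_normal(16)) + 1j * np.sign(rng.standard_normal(16))
y = H @ x + 0.05 * (rng.standard_normal(128) + 1j * rng.standard_normal(128))
print(np.linalg.norm(mmse_detect_newton(H, y, 0.005) - x))
```

Each Newton-Schulz step costs two K x K matrix products, which is why the paper limits it to the first two iterations and switches to a cheaper update afterwards.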
Existing specific emitter identification (SEI) methods based on hand-crafted features suffer from loss of feature information and require multiple processing stages, which reduce identification accuracy and complicate the identification procedure. In this paper, we propose a deep SEI approach based on multidimensional feature extraction of radio frequency fingerprints (RFFs), namely RFFsNet-SEI. In particular, we extract multidimensional physical RFFs from the received signal by means of variational mode decomposition (VMD) and the Hilbert transform (HT). The physical RFFs and the I-Q data are combined into balanced-RFFs, which are then used to train RFFsNet-SEI. By introducing model-aided RFFs into the neural network, a hybrid-driven scheme comprising physical features and I-Q data is constructed, which improves the physical interpretability of RFFsNet-SEI. Meanwhile, since RFFsNet-SEI identifies individual emitters from the received raw data in an end-to-end manner, it accelerates SEI implementation and simplifies the identification procedure. Moreover, because RFFsNet-SEI extracts both the temporal and the spectral features of the received signal, identification accuracy is improved. Finally, we compare RFFsNet-SEI with its counterparts in terms of identification accuracy, computational complexity, and prediction speed. Experimental results illustrate that the proposed method outperforms the counterparts on both a simulation dataset and a real dataset collected in an anechoic chamber.
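The Hilbert-transform part of the physical RFF extraction can be sketched as below, producing instantaneous amplitude, phase, and frequency from a decomposed mode; the VMD stage is assumed to come from a third-party implementation (e.g., the `vmdpy` package) and is not shown, and the sampling rate, signal, and feature layout are illustrative rather than the paper's exact balanced-RFF construction.

```python
import numpy as np
from scipy.signal import hilbert

def hilbert_rffs(mode: np.ndarray, fs: float) -> np.ndarray:
    """Instantaneous amplitude, phase, and frequency of a real-valued mode (e.g., a VMD mode)."""
    analytic = hilbert(mode)
    amp = np.abs(analytic)
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(phase) / (2.0 * np.pi) * fs
    return np.stack([amp[:-1], phase[:-1], inst_freq])   # 3 x (N-1) feature map

# Toy example: a chirp-like mode sampled at 1 MHz
fs = 1e6
t = np.arange(4096) / fs
mode = np.cos(2 * np.pi * (50e3 * t + 5e6 * t**2))
features = hilbert_rffs(mode, fs)
print(features.shape)   # (3, 4095); such maps can be stacked with I-Q data as network input
```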
Classical localization methods use Cartesian or polar coordinates, which require a priori range information to decide whether to estimate the position or only the bearing. The modified polar representation (MPR) unifies the near-field and far-field models, alleviating the thresholding effect. Existing localization methods in MPR based on angle of arrival (AOA) and time difference of arrival (TDOA) measurements resort to semidefinite relaxation (SDR) and Gauss-Newton iteration, which are computationally complex and may diverge. This paper formulates a pseudo-linear equation between the measurements and the unknown MPR position, which leads to a closed-form solution for the hybrid TDOA-AOA localization problem, namely hybrid constrained optimization (HCO). HCO attains Cramér-Rao bound (CRB)-level accuracy under mild Gaussian noise. Compared with existing closed-form solutions for the hybrid TDOA-AOA case, HCO provides performance comparable to the hybrid generalized trust region subproblem (HGTRS) solution and is better than the hybrid successive unconstrained minimization (HSUM) solution in the large-noise region, while its computational complexity is lower than that of HGTRS. Simulations validate that HCO achieves the CRB, which the maximum likelihood estimator (MLE) also attains when the noise is small, but the MLE deviates from the CRB earlier.
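To illustrate the pseudo-linear idea, namely rearranging nonlinear measurement equations into a linear system in the unknown position that admits a least-squares solution, the sketch below solves a plain 2-D bearings-only problem in Cartesian coordinates. It is not the MPR-based HCO estimator of the paper, which additionally incorporates TDOA rows and a constraint on the MPR parameters; the sensor geometry and noise level are illustrative.

```python
import numpy as np

def pseudo_linear_aoa(sensors: np.ndarray, bearings: np.ndarray) -> np.ndarray:
    """2-D bearings-only pseudo-linear least squares:
    sin(b_i) * x - cos(b_i) * y = sin(b_i) * sx_i - cos(b_i) * sy_i."""
    A = np.column_stack([np.sin(bearings), -np.cos(bearings)])
    b = np.sum(A * sensors, axis=1)
    u, *_ = np.linalg.lstsq(A, b, rcond=None)
    return u

# Toy example: four sensors observing a source at (40, 25) with mildly noisy bearings
sensors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
source = np.array([40.0, 25.0])
bearings = np.arctan2(source[1] - sensors[:, 1], source[0] - sensors[:, 0])
bearings += 0.002 * np.random.randn(4)
print(pseudo_linear_aoa(sensors, bearings))   # approximately [40, 25]
```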