Current Issue

24 February 2017, Volume 28 Issue 1
Quaternion based joint DOA and polarization parameters estimation with stretched three-component electromagnetic vector sensor array
Jichao Zhao and Haihong Tao
2017, 28(1):  1.  doi:10.21629/JSEE.2017.01.01

The three-component electromagnetic vector sensor (EMVS) consisting of a co-centered, orthogonally oriented x-dipole, z-dipole and z-loop is considered. In order to make full use of the spatial aperture of each component, the original uniform linear three-component EMVS array (ULTEA) is stretched into a half-wavelength-spaced uniform linear loop subarray (ULLSA) along the z axis and a sparse uniform linear co-centered orthogonally oriented dual-dipole (CODD) subarray (SULCSA) along the x axis. A generalized rotation invariance based quaternion multiple signal classification (GRIQ-MUSIC) algorithm is then presented for direction of arrival (DOA) and polarization parameter estimation. In the proposed algorithm, the elevation angles are first estimated from the half-wavelength-spaced ULLSA. The polarization phase differences and azimuth angles are then obtained from the coupling relationship between the angle domain and the polarization domain; the azimuth angles, however, are only coarsely resolved at this stage since the array aperture is not yet exploited. Next, the SULCSA is used to re-estimate the azimuth angles in fine resolution, and the associated ambiguity is resolved by the least squares method. Finally, based on the estimated elevation angles, azimuth angles and polarization phase differences, the corresponding auxiliary polarization angles are estimated by N one-dimensional parameter searches, where N is the number of sources, and the parameters are matched automatically. The GRIQ-MUSIC algorithm thus reduces the high-dimensional parameter search of the conventional Q-MUSIC algorithm to a one-dimensional search, which not only lowers the computational complexity considerably but also avoids the performance degradation caused by failures in parameter pairing. Simulation examples demonstrate the effectiveness and feasibility of the proposed algorithm.

Dimensions estimation for cone-cylinder target based on sliding-type scatterers analysis
Yuling Liu, Xizhang Wei, Bo Peng, and Yongshun Ling
2017, 28(1):  10.  doi:10.21629/JSEE.2017.01.02

Scatterers often exhibit aspect or frequency dependence, which affects the micro-Doppler shift in the scattering response. For cone-cylinder targets, sliding-type scatterers, which slide along the edge discontinuity as the incident angle changes, are the dominant non-ideal scattering models. A method is proposed to discriminate among the scatterers on a cone-cylinder target based on how far the micro-Doppler deviates from a sinusoid. By extracting the amplitude and initial phase, the micro-Doppler is fitted as a sinusoid; the deviation degree is then evaluated as the error between the fitted sinusoidal micro-Doppler and the actual micro-Doppler curve. The classification threshold is determined from simulation data. After classification, the micro-Doppler features of the sliding-type scatterers are exploited to estimate the target dimensions. The influence of parameter errors and noise on the dimension estimates is also illustrated with simulation data.
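The deviation-degree idea can be illustrated with a minimal sketch (not the paper's implementation): fit a sinusoid of known micro-Doppler frequency by least squares and score the scatterer by the RMS residual. The known frequency and the two-amplitude fit are illustrative assumptions.

```python
import math

def sinusoid_deviation(times, values, omega):
    """Least-squares fit of A*sin(omega*t) + B*cos(omega*t) to a
    micro-Doppler history of known frequency; the RMS residual is the
    deviation-from-sinusoid score: near zero for an ideal localized
    scatterer, larger for a sliding-type scatterer."""
    s = [math.sin(omega * t) for t in times]
    c = [math.cos(omega * t) for t in times]
    # normal equations for the two amplitudes [A, B]
    ss = sum(v * v for v in s)
    cc = sum(v * v for v in c)
    sc = sum(a * b for a, b in zip(s, c))
    ys = sum(y * a for y, a in zip(values, s))
    yc = sum(y * b for y, b in zip(values, c))
    det = ss * cc - sc * sc
    A = (ys * cc - yc * sc) / det
    B = (yc * ss - ys * sc) / det
    res = [y - (A * a + B * b) for y, a, b in zip(values, s, c)]
    return math.sqrt(sum(r * r for r in res) / len(res))

# An ideal sinusoidal history vs. one with a harmonic distortion.
times = [i * 0.01 for i in range(100)]
ideal = [2.0 * math.sin(2 * math.pi * t + 0.3) for t in times]
distorted = [math.sin(2 * math.pi * t) + 0.5 * math.sin(4 * math.pi * t)
             for t in times]
dev_ideal = sinusoid_deviation(times, ideal, 2 * math.pi)
dev_distorted = sinusoid_deviation(times, distorted, 2 * math.pi)
```

A threshold on this score, calibrated on simulation data as the abstract describes, would then separate the two scatterer classes.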

Efficient recovery of group-sparse signals with truncated and reweighted l2,1-regularization
Yan Zhang, Jichang Guo, and Xianguo Li
2017, 28(1):  19.  doi:10.21629/JSEE.2017.01.03

The l2,1-norm regularization can efficiently recover group-sparse signals whose non-zero coefficients occur in a few groups. It is well known that l2,1-norm regularization based on the classic alternating direction method shows strong stability and robustness in many applications; however, it requires relatively many measurements. In order to recover group-sparse signals with a better sparsity-measurement tradeoff, truncated l2,1-norm regularization and reweighted l2,1-norm regularization are proposed for the recovery of group-sparse signals based on iterative support detection. The proposed algorithms are tested and compared with the l2,1-norm model on a series of synthetic signals and the Shepp-Logan phantom. Experimental results demonstrate the performance of the proposed algorithms, especially at a low sample rate and high sparsity level.
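The group-sparsity-inducing core of l2,1 regularization is the group soft-thresholding (proximal) operator, sketched below as a generic illustration under the standard definition, not as the paper's truncated or reweighted variant.

```python
import math

def group_soft_threshold(x, groups, lam):
    """Proximal operator of the l2,1 norm: each group of coefficients
    is shrunk toward zero by lam in its Euclidean norm, and groups
    whose norm falls below lam are zeroed entirely -- the operation
    that induces group sparsity."""
    out = list(x)
    for g in groups:
        norm = math.sqrt(sum(x[i] ** 2 for i in g))
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        for i in g:
            out[i] = x[i] * scale
    return out

# A strong group (indices 0,1) is merely shrunk; a weak group (2,3)
# is eliminated entirely.
shrunk = group_soft_threshold([3.0, 4.0, 0.1, 0.2], [(0, 1), (2, 3)], lam=1.0)
```

The truncated and reweighted variants modify which groups this shrinkage is applied to, guided by iterative support detection.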

Fractal detector design and application in maritime target detection
Xinglin Shen, Zhiyong Song, Yongfeng Zhu, and Qiang Fu
2017, 28(1):  27.  doi:10.21629/JSEE.2017.01.04

The fractal properties of sea clutter are analyzed and applied to maritime target detection. Calculations on measured data show that the Hurst exponent of sea clutter with targets differs from that of sea clutter without targets, which enables the detection of low-observable targets within the sea clutter. This paper explains theoretically why the Hurst exponent can distinguish the presence or absence of targets and proposes a fractal detector based on the Hurst exponent. Comparing the proposed fractal detector with an energy detector on 140 frames of real sea clutter data demonstrates that the fractal detection method has better detection performance. To obtain systematic conclusions, new sea clutter data with different signal-to-clutter ratios (SCRs) are constructed by adding sea clutter data with targets to pure sea clutter data. The results show that the fractal detection method outperforms the statistical method in detecting maritime targets, especially weak targets with low SCR.
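A standard way to estimate the Hurst exponent, sketched here as a generic rescaled-range (R/S) estimator rather than the paper's specific procedure, is to fit the slope of log(R/S) against log(window size):

```python
import math

def hurst_rs(x, window_sizes):
    """Estimate the Hurst exponent of an increment series by R/S
    analysis: for each window size n, average the rescaled range R/S
    over non-overlapping chunks, then return the least-squares slope
    of log(R/S) versus log(n)."""
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):
            chunk = x[start:start + n]
            mean = sum(chunk) / n
            dev = [v - mean for v in chunk]
            z, c = [], 0.0          # cumulative deviate series
            for d in dev:
                c += d
                z.append(c)
            r = max(z) - min(z)     # range of cumulative deviations
            s = math.sqrt(sum(d * d for d in dev) / n)
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            log_n.append(math.log(n))
            log_rs.append(math.log(sum(rs_vals) / len(rs_vals)))
    m = len(log_n)
    mx, my = sum(log_n) / m, sum(log_rs) / m
    num = sum((a - mx) * (b - my) for a, b in zip(log_n, log_rs))
    den = sum((a - mx) ** 2 for a in log_n)
    return num / den

h_trend = hurst_rs(list(range(1, 65)), [8, 16, 32])   # persistent series
h_flip = hurst_rs([1, -1] * 32, [8, 16, 32])          # anti-persistent series
```

A detector of the kind described would compare such an estimate against a threshold separating target-bearing from target-free clutter.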

Method for compensating translational motion of rotationally symmetric target based on local symmetry cancellation
Jingqing Li, Sisan He, Cunqian Feng, and Yizhe Wang
2017, 28(1):  36.  doi:10.21629/JSEE.2017.01.05

The micro-Doppler effect of moving targets may suffer from aliasing and Doppler migration due to translational motion, which affects real-time target identification. A new compensation method for rotationally symmetric targets is proposed and demonstrated. By utilizing the micro-Doppler symmetry cancellation effect, the method can accurately compensate the translational effect while also performing well in noise suppression. Computer simulations verify the high accuracy and efficiency of this method.
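The cancellation idea can be sketched in its simplest form (an illustrative toy, with opposite-signed micro-Doppler for a symmetric pair assumed): averaging the Doppler histories of two symmetric scatterers cancels the micro-motion and exposes the common translational component to be compensated.

```python
def cancel_symmetric(fd_a, fd_b):
    """For a symmetric scatterer pair on a rotationally symmetric
    target, the micro-Doppler contributions are equal and opposite, so
    averaging the two Doppler histories cancels the micro-motion and
    leaves the shared translational Doppler."""
    return [(a + b) / 2.0 for a, b in zip(fd_a, fd_b)]

translation = [10.0, 11.0, 12.0]   # shared translational Doppler
micro = [1.5, -2.0, 0.7]           # opposite-signed micro-Doppler
recovered = cancel_symmetric(
    [t + m for t, m in zip(translation, micro)],
    [t - m for t, m in zip(translation, micro)])
```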

Motion compensation algorithm in narrowband imaging for mid-course targets
Xizhang Wei, Zhen Liu, Xiaofeng Ding, and Bo Peng
2017, 28(1):  40.  doi:10.21629/JSEE.2017.01.06

Imaging precession targets with narrowband radar to distinguish the real warhead among mid-course targets is a new technique in missile defense systems. Aiming at a typical mid-course flight scene, this paper describes the principle of the narrowband imaging method and analyzes how the target's translational motion affects narrowband imaging. The precision requirements are then derived for both distance compensation and velocity compensation, leading to the conclusion that velocity compensation is more appropriate for motion compensation in narrowband imaging. Furthermore, this paper proposes a motion compensation algorithm for narrowband radar that reduces the interference of signal phase chaos caused by rebuilding the real signal phase, thus realizing precise translational motion compensation of the narrowband signal. Finally, the effectiveness of the method is demonstrated by both theoretical analysis and experimental results.
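Velocity compensation in its generic form can be sketched as removing the two-way translational Doppler phase from the narrowband echo; the 4*pi*v*t/lambda phase model below is the standard two-way convention, assumed here for illustration rather than taken from the paper.

```python
import cmath
import math

def velocity_compensate(echo, times, v, wavelength):
    """Multiply a complex narrowband echo by the conjugate translational
    phase exp(-j*4*pi*v*t/lambda) (two-way path), so that only the
    micro-motion-induced phase remains."""
    return [s * cmath.exp(-1j * 4.0 * math.pi * v * t / wavelength)
            for s, t in zip(echo, times)]

# An echo whose phase is purely translational compensates to a constant.
times = [i * 1e-4 for i in range(5)]
echo = [cmath.exp(1j * 4.0 * math.pi * 100.0 * t / 0.03) for t in times]
flat = velocity_compensate(echo, times, 100.0, 0.03)
```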

Entropy-based multipath detection model for MIMO radar
Junpeng Shi, Guoping Hu, and Hao Zhou
2017, 28(1):  51.  doi:10.21629/JSEE.2017.01.07

An optimized detection model based on weighted entropy for multiple-input multiple-output (MIMO) radar in a multipath environment is presented. After defining the multipath distance difference (MDD), a multipath received-signal model with four paths is built systematically. Both the variance and the correlation coefficient of the multipath scattering coefficient with respect to the MDD are analyzed, which indicates that multipath effects can degrade detection performance by reducing the echo power. Using the likelihood ratio test (LRT), a new weighted-entropy-based method is introduced to exploit the positive multipath echo power and suppress the negative echo power, which results in better performance. Simulation results show that, compared with the non-multipath case and other recently developed methods, the proposed method achieves improved detection performance as the number of sensors increases.

Texture invariant estimation of equivalent number of looks based on log-cumulants in polarimetric radar imagery
Xianghui Yuan and Tao Liu
2017, 28(1):  58.  doi:10.21629/JSEE.2017.01.08

A novel estimator of the equivalent number of looks (ENL) is proposed for the statistical modeling of multilook polarimetric synthetic aperture radar (PolSAR) images under the product model, based on log-determinant moments (LDM). The LDM estimator is derived by examining certain log-cumulants of the intensities of different polarization channels and of the multilook polarimetric covariance matrix, and it applies to both the Gaussian model and all product models. The estimator has analytic expressions and uses the full covariance matrix and the intensities as input, which makes more statistical information available. Experiments on simulated and real data are performed. Comparisons with widely used ENL estimation methods for product models such as the K and G0 distributions show that the LDM estimator performs outstandingly. The estimators' performance on the real San Francisco and Flevoland data is analyzed, and the results are consistent with those for the simulated data. It is concluded that the LDM estimator is robust to each product model, with low computational complexity and high accuracy.
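For context, the baseline that such estimators improve upon is the classic single-channel moment estimator, shown below as a sketch; this is the conventional textbook definition, not the paper's LDM estimator.

```python
def enl_moment(intensity):
    """Conventional moment-based ENL estimate on a homogeneous region:
    for L-look intensity data the ratio mean^2 / variance equals L."""
    n = len(intensity)
    mean = sum(intensity) / n
    var = sum((v - mean) ** 2 for v in intensity) / n
    return mean * mean / var

# Toy homogeneous region: mean 3, variance 1 -> ENL estimate 9.
looks = enl_moment([2.0, 2.0, 4.0, 4.0])
```

The LDM approach replaces these intensity moments with log-cumulants of the full polarimetric covariance matrix, which is what makes it texture-invariant under product models.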

Role-based approaches for operational task-resource flexible matching model and algorithm
Zhigang Zou, Wangfang Che, Fuling Mu, Yiqian Chao, and Bo Zhang
2017, 28(1):  67.  doi:10.21629/JSEE.2017.01.09

Previous studies of operational task-resource matching make it hard to generate, by flexible computation, a new matching scheme from an existing one when operational tasks or resources are adjusted under uncertainty. To solve this problem, role-based approaches to an operational task-resource flexible matching model and algorithm are proposed in this paper. Firstly, by introducing the concept of roles, a role-based calculation framework for operational task-resource flexible matching is constructed. Secondly, two indexes, the resource utilization ratio (RUR) and the time utilization ratio (TUR), are given to reflect the operational task-resource matching quality (TRMQ), and serve as the objective function of the role-based operational task-resource flexible matching model. On this basis, a role-based artificial bee colony (RABC) algorithm, comprising five specific calculation operators, is put forward to solve the flexible matching problem with double encoding of operational tasks and resources by roles. Finally, comparison with previous methods on application cases validates that the role-based model and the proposed algorithm are more effective: they can derive a new incremental scheme from an existing matching scheme to accommodate adjustments of operational tasks or resources. Moreover, the proposed approaches offer computational advantages in solving large-scale operational task-resource matching problems flexibly.

Construction of composite indicator system based on simulation data mining
Jianfei Ding, Guangya Si, Baoqiang Li, Jingyu Yang, and Yu Zhang
2017, 28(1):  81.  doi:10.21629/JSEE.2017.01.10

The indicator system is the foundation of and emphasis in the effectiveness evaluation of a system of systems (SoS). In the past, indicator systems were founded on qualitative methods, and every indicator was mainly determined by experts based on experience. This paper proposes a new method to construct indicator systems based on repeated simulation over the scenario space, calculated from quantitative data. Firstly, key indicators are selected using the Gini-based indicator importance measure (IIM) computed by random forests (RFs). Then, principal component analysis (PCA) is applied to the selected indicators to construct the composite indicator system of the SoS. Furthermore, a set of rules is developed to verify the practicability of the indicator system in terms of correlation, robustness, accuracy and convergence. Experiments show that the algorithm achieves good results for the construction of composite indicators of an SoS.
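The quantity a random forest accumulates per indicator to form a Gini-based importance measure is the impurity decrease of each split, illustrated below for a single split (a generic sketch, not the paper's full IIM pipeline).

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a label multiset."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def gini_decrease(values, labels, threshold):
    """Impurity decrease obtained by splitting one candidate indicator
    at a threshold: parent impurity minus the size-weighted impurity of
    the two children. Summing this over a forest's splits yields a
    Gini importance score per indicator."""
    left = [l for v, l in zip(values, labels) if v <= threshold]
    right = [l for v, l in zip(values, labels) if v > threshold]
    n = len(labels)
    return (gini(labels)
            - len(left) / n * gini(left)
            - len(right) / n * gini(right))

# A perfectly separating indicator earns the full parent impurity of 0.5.
importance = gini_decrease([1.0, 2.0, 8.0, 9.0], [0, 0, 1, 1], threshold=5.0)
```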

Double weight determination method for experts of complex multi-attribute large-group decision-making in interval-valued intuitionistic fuzzy environment
Bingsheng Liu, Sijia Guo, Kaijing Yan, Ling Li, and Xueqing Wang
2017, 28(1):  88.  doi:10.21629/JSEE.2017.01.11

Systematic clustering-analysis-based weight determination methods are suitable for the experts of complex multi-attribute large-group decision-making (CMALGDM) in an interval-valued intuitionistic fuzzy environment. However, these methods have two main shortcomings: they do not consider the consistency of the experts within each aggregation, and the aggregation weights are often determined simply by the "majority principle", neglecting the quantity of information provided by the holistic aggregation and thus leading to decision biases. Hence, a double weight determination method for experts is proposed to solve these problems. For the first shortcoming, a mathematical programming model is used to solve for the optimal expert weights within each aggregation, ensuring consistency in the overall preferences of the aggregations. For the second shortcoming, a modification of the aggregation weights based on information entropy is proposed, which fully considers both the number of experts and the amount of information provided by the holistic aggregation. With the proposed method, the final expert weights are determined more rigorously and objectively. The feasibility of the proposed method is investigated through an illustrative example.
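The entropy-based weighting idea can be sketched with the standard entropy weight method (a generic illustration; the paper's modification additionally accounts for the number of experts per aggregation):

```python
import math

def entropy_weights(matrix):
    """Standard entropy weight method: each column (here, an expert
    aggregation) is weighted by 1 minus its normalized Shannon entropy,
    so aggregations carrying more information receive larger weights."""
    m = len(matrix)       # rows: alternatives / evaluations
    n = len(matrix[0])    # columns: aggregations
    raw = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        probs = [v / total for v in col]
        e = -sum(p * math.log(p) for p in probs if p > 0) / math.log(m)
        raw.append(1.0 - e)
    s = sum(raw)
    return [w / s for w in raw]

# Column 0 is uninformative (uniform); column 1 is fully concentrated.
w = entropy_weights([[1.0, 2.0],
                     [1.0, 0.0]])
```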


New method for hybrid multiple attribute decision-making problems with decision maker’s aspiration level
Bengang Gong, Dandan Guo, and Wenqi Jiang
2017, 28(1):  97.  doi:10.21629/JSEE.2017.01.12

How can a decision maker's aspiration level be accounted for in multiple attribute decision-making problems? This question has recently gained much scholarly attention. This study draws on prospect theory and the Dempster-Shafer theory to propose a method that resolves this issue. First, the precise numbers, interval numbers and linguistic labels are compared, and the decision matrix is transformed into a prospect decision matrix on the basis of the decision maker's aspiration level. Second, the Dempster-Shafer theory is applied to aggregate the prospect values of all alternatives on each attribute using their prospect belief intervals. Third, the alternatives are ranked by comparing their prospect belief intervals. Finally, the effectiveness and feasibility of the proposed method are demonstrated with an illustrative example.
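The prospect-theoretic transformation at the heart of the first step can be sketched as the classic value function relative to the aspiration level; the parameter values below are the standard Tversky-Kahneman estimates, used purely as illustrative defaults.

```python
def prospect_value(outcome, aspiration, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value of an outcome measured against the
    decision maker's aspiration level: concave for gains, convex and
    steeper for losses (loss aversion, lam > 1)."""
    d = outcome - aspiration
    if d >= 0:
        return d ** alpha
    return -lam * (-d) ** beta

gain = prospect_value(6.0, 5.0)   # a unit gain above aspiration
loss = prospect_value(4.0, 5.0)   # a unit loss below aspiration
```

Applying this pointwise to each matrix entry yields the prospect decision matrix the abstract refers to.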

Robust entry guidance using multi-segment linear pseudospectral model predictive control
Liang Yang, Wanchun Chen, Xiaoming Liu, and Hao Zhou
2017, 28(1):  103.  doi:10.21629/JSEE.2017.01.13

This paper presents a robust entry guidance algorithm for high lift-to-drag ratio entry vehicles that employs the recently developed linear pseudospectral model predictive control in a segmented manner. The guidance commands are the longitudinal lift-to-drag (L/D) ratio and bank reversal commands, which are calculated by successively solving multiple segmented linear algebraic equations. These equations are derived using the linear pseudospectral method, control parametrization and the calculus of variations. The method uses orthogonal polynomials and computes the updates through a series of analytical formulae, which makes it accurate and computationally efficient. Moreover, it is able to adjust the number of bank reversals by providing the precise bank reversal points, so as to fully exploit the potential for lateral maneuver. The method also employs proportional navigation and polynomial guidance after the last bank reversal to meet multiple terminal constraints. High-fidelity numerical simulations with various destinations demonstrate its applicability. Furthermore, Monte Carlo simulations show that the proposed algorithm consistently offers stable and robust performance, with superior computational efficiency, guidance accuracy and lateral trajectory-shaping capability compared with other typical methods.

Sensor fault-tolerant observer applied in UAV anti-skid braking control under control input constraint
Hui Sun, Jianguo Yan, Yaohong Qu, and Jie Ren
2017, 28(1):  126.  doi:10.21629/JSEE.2017.01.14

This paper proposes a method for sensor fault-tolerant control (FTC) of anti-skid braking systems (ABSs). When the wheel velocity sensor of the ABS of an unmanned aerial vehicle (UAV) becomes faulty, the wheel velocity feedback may fail and the loop may become unstable. Firstly, a fault diagnosis and isolation (FDI) method based on a sliding mode observer is introduced to detect and isolate sensor faults. When the wheel velocity sensor is healthy, the observer works in diagnosis mode; if the sensor fails, it acts as a wheel velocity estimator. Secondly, an FTC strategy adopting a feedback compensation structure is designed under control input constraints. In addition, based on the FDI result, a terminal sliding mode (TSM) controller is designed to guarantee that the slip ratio tracks its reference values even when the runway condition changes during landing. The control system switches automatically from control using the wheel velocity sensor to sensorless control mode, establishing the observer-based FTC scheme. Consequently, the ABS keeps tracking the observed state and remains stable when the wheel velocity sensor is broken and under external disturbance. Finally, simulation results show the effectiveness of the proposed method.

Optimization of projectile state and trajectory of reentry body considering attainment of carrying aircraft
Changsheng Gao, Chunwang Jiang, and Wuxing Jing
2017, 28(1):  137.  doi:10.21629/JSEE.2017.01.15

As an improvement over traditional maximum-range trajectory optimization, both the projectile state and the control sequence of the reentry body are optimized to achieve maximum range. To ensure that the optimal projectile state lies within the reachable domain of the carrying aircraft, the climb of the carrying aircraft and the flight of the reentry body are optimized as a whole based on the hp-adaptive pseudospectral method. Meanwhile, path constraints, control constraints and initial/terminal state constraints are considered in the optimization. Since the optimal control solution obtained by the pseudospectral method may be unsmooth, virtual control variables (the second derivatives of the angle of attack and bank angle) are introduced into the optimization, while the angle of attack and bank angle are treated as virtual state variables, to guarantee the smoothness of the optimal control solution. The effectiveness of the virtual control variables in smoothing the optimal control solution is verified by simulation, and the optimality of the resulting projectile state and control sequence for reaching maximum range is confirmed using an indirect approach.

Terminal sliding mode control based on super-twisting algorithm
Zhanshan Zhao, Hongru Gu, Jing Zhang, and Gang Ding
2017, 28(1):  145.  doi:10.21629/JSEE.2017.01.16

An improved non-singular terminal sliding mode control based on the super-twisting algorithm is proposed for a class of second-order uncertain nonlinear systems. The method effectively avoids the singularity problem and markedly reduces the chattering phenomenon. Finite-time convergence of the proposed procedure is proven using Lyapunov theory in the presence of unmodeled dynamics and external disturbances. An example is given to show the effectiveness of the proposed improved non-singular terminal sliding mode control (SMC) law.
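The chattering-reduction property of the super-twisting algorithm can be illustrated with a minimal Euler simulation on a scalar sliding variable; the gains, disturbance and step size below are illustrative assumptions, not values from the paper.

```python
import math

def sgn(x):
    return (x > 0) - (x < 0)

def super_twisting(s0, k1=1.5, k2=1.1, dt=1e-3, steps=20000):
    """Euler simulation of the super-twisting algorithm on a sliding
    variable with dynamics ds/dt = u + d(t), d(t) = 0.3*sin(t).
    The continuous control u = -k1*sqrt(|s|)*sgn(s) + v with
    dv/dt = -k2*sgn(s) drives s to a small neighborhood of zero in
    finite time, without the high-frequency switching of first-order
    sliding mode control."""
    s, v = s0, 0.0
    for i in range(steps):
        t = i * dt
        u = -k1 * math.sqrt(abs(s)) * sgn(s) + v
        v += -k2 * sgn(s) * dt
        s += (u + 0.3 * math.sin(t)) * dt
    return s

residual = super_twisting(1.0)   # |s| ends near zero despite d(t)
```

Note that k2 exceeds the Lipschitz bound of the disturbance derivative (0.3 here), which is the standard sufficient condition for convergence.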

Unsupervised feature selection based on Markov blanket and particle swarm optimization
Yintong Wang, Jiandong Wang, Hao Liao, and Haiyan Chen
2017, 28(1):  151.  doi:10.21629/JSEE.2017.01.17

Feature selection plays an important role in data mining and recognition, especially for large-scale text, image and biological data. In unsupervised feature selection, class label information is unavailable to guide the selection of a minimal feature subset, which makes the problem challenging and interesting. An unsupervised feature selection method based on the Markov blanket and particle swarm optimization, named UFSMB-PSO, is proposed. The proposed method seeks the high-quality feature subset through the cooperation of multiple particles in particle swarm optimization, without using any learning algorithm. Moreover, feature relevance is computed based on an information metric of relevance gain, which provides an information-theoretic foundation for minimizing the redundancy between features. Results on several benchmark datasets demonstrate that UFSMB-PSO achieves significant improvement over state-of-the-art unsupervised methods.
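Information metrics of this kind are built from mutual information between discrete features, sketched below in its textbook form (an illustration of the underlying quantity, not the paper's relevance-gain definition):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Mutual information between two discrete features,
    I(X;Y) = sum_{x,y} p(x,y) * log(p(x,y) / (p(x)p(y))).
    High values flag redundant feature pairs; zero means independence."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y)/(p(x)p(y)) simplifies to c*n/(count_x*count_y)
        mi += (c / n) * math.log(c * n / (px[x] * py[y]))
    return mi

redundant = mutual_information([0, 0, 1, 1], [0, 0, 1, 1])
independent = mutual_information([0, 0, 1, 1], [0, 1, 0, 1])
```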

Convolutional neural networks for time series classification
Bendong Zhao, Huanzhang Lu, Shangfeng Chen, Junliang Liu, and Dongya Wu
2017, 28(1):  162.  doi:10.21629/JSEE.2017.01.18

Time series classification is an important task in time series data mining and has attracted great interest and tremendous effort during the last decades. However, it remains a challenging problem due to the nature of time series data: high dimensionality, large data size and continuous updating. Deep learning techniques are explored to improve on traditional feature-based approaches. Specifically, a novel convolutional neural network (CNN) framework is proposed for time series classification. Unlike other feature-based classification approaches, a CNN can automatically discover and extract suitable internal structure to generate deep features of the input time series, using convolution and pooling operations. Two groups of experiments are conducted on simulated data sets and eight groups on real-world data sets from different application domains. The experimental results show that the proposed method outperforms state-of-the-art methods for time series classification in terms of classification accuracy and noise tolerance.
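The two building blocks a CNN applies to a time series, convolution and pooling, can be sketched in plain Python (a toy layer with a hand-picked kernel, not the trained network of the paper):

```python
def conv1d(x, kernel, bias=0.0):
    """Valid 1-D convolution (cross-correlation) followed by ReLU,
    the basic feature-extraction operation of a CNN layer."""
    k = len(kernel)
    return [max(0.0, sum(x[i + j] * kernel[j] for j in range(k)) + bias)
            for i in range(len(x) - k + 1)]

def max_pool(x, size):
    """Non-overlapping max pooling: keeps the strongest response in
    each window, giving downsampling and translation tolerance."""
    return [max(x[i:i + size]) for i in range(0, len(x) - size + 1, size)]

# An edge-detector kernel responds where the series jumps.
series = [0, 0, 0, 1, 1, 1]
feat = conv1d(series, [-1.0, 1.0])   # -> [0.0, 0.0, 1.0, 0.0, 0.0]
pooled = max_pool(feat, 2)           # -> [0.0, 1.0]
```

In a trained CNN the kernel weights are learned from data rather than hand-picked, which is what lets the network discover suitable internal structure automatically.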

Efficient privacy-preserving classification construction model with differential privacy technology
Lin Zhang, Yan Liu, Ruchuan Wang, Xiong Fu, and Qiaomin Lin
2017, 28(1):  170.  doi:10.21629/JSEE.2017.01.19

To address privacy disclosure during data mining, a new privacy-preserving decision tree classification construction model based on a differential privacy protection mechanism is presented. An efficient classifier that uses feedback to add two types of noise, via the Laplace and exponential mechanisms, to perturb the calculation results is introduced into the construction algorithm, providing a secure data access interface for users. Different split solutions for attributes with continuous and discrete values are provided and used to optimize the search scheme, reducing the classifier's error rate. By choosing an available quality function with lower sensitivity for making decisions and improving the privacy budget allocation, the algorithm effectively resists malicious attacks that rely on background knowledge. The potential problem of obtaining personal information by guessing unknown sensitive nodes of tree-structured data is solved correspondingly. Simulation experiments show the improved privacy preservation and accuracy of the new algorithm.
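The first of the two noise mechanisms, the Laplace mechanism, can be sketched generically (this is the standard definition, not the paper's feedback-driven budget allocation):

```python
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a numeric query result with Laplace(sensitivity/epsilon)
    noise, the standard mechanism for epsilon-differential privacy. The
    noise is sampled as the difference of two exponential variates,
    which is Laplace-distributed."""
    rng = rng or random.Random()
    scale = sensitivity / epsilon
    return true_value + scale * (rng.expovariate(1.0) - rng.expovariate(1.0))

# A loose privacy budget (huge epsilon) adds almost no noise to a
# count of 100; a tight budget would perturb it substantially.
almost_exact = laplace_mechanism(100.0, 1.0, 1e9, random.Random(0))
```

In a decision tree construction, each count or split-quality query consumes part of the total privacy budget epsilon, which is why the budget allocation strategy matters.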

Reliability indexes for multi-AUV cooperative systems
Qingwei Liang, Tianyuan Sun, and Dongdong Wang
2017, 28(1):  179.  doi:10.21629/JSEE.2017.01.20

With the development of multi-autonomous underwater vehicle (AUV) cooperative systems, evaluating their reliability is becoming more and more important. Based on the characteristics of such systems, three reliability indexes are determined in this paper: the standard entropy of the rank distribution, the all-terminal reliability and the standard natural connectivity. Both the topology structure and underwater acoustic communication are considered. Calculation methods for the three reliability indexes are proposed, and the characteristics of these indexes are analyzed. As examples, the reliability indexes of two multi-AUV cooperative systems are obtained. The results show that the three indexes reflect different aspects of the reliability of multi-AUV cooperative systems. This paper thus provides methods of practical value for evaluating the reliability of multi-AUV cooperative systems.
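The second index, all-terminal reliability, has a direct definition that can be sketched by brute-force enumeration (a generic illustration feasible only for small topologies; the paper's calculation method may differ):

```python
from itertools import combinations

def all_terminal_reliability(nodes, edges, p):
    """All-terminal reliability by exhaustive enumeration: the
    probability that the surviving edges (each operational
    independently with probability p) keep every node connected.
    Exponential in the number of edges, but adequate for small
    multi-AUV topologies."""
    def connected(up):
        adj = {v: [] for v in nodes}
        for a, b in up:
            adj[a].append(b)
            adj[b].append(a)
        seen, stack = {nodes[0]}, [nodes[0]]
        while stack:          # depth-first search over surviving links
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return len(seen) == len(nodes)

    total = 0.0
    m = len(edges)
    for k in range(m + 1):
        for sub in combinations(edges, k):
            if connected(sub):
                total += p ** k * (1.0 - p) ** (m - k)
    return total

# Triangle of three AUVs with acoustic-link availability 0.9:
# R = p^3 + 3*p^2*(1-p) = 0.972
triangle = all_terminal_reliability([0, 1, 2], [(0, 1), (1, 2), (0, 2)], 0.9)
```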

Fully Bayesian reliability assessment of multi-state systems with overlapping data
Zhipeng Hao, Jianbin Guo, and Shengkui Zeng
2017, 28(1):  187.  doi:10.21629/JSEE.2017.01.21

Failure data at the system level are often limited, resulting in high uncertainty in system reliability assessment. Integrating data drawn from various structural levels of the target system (e.g. the system, subsystems, assemblies and components), i.e. multi-level data, through Bayesian analysis can improve the precision of system reliability assessment. However, if the multi-level data are overlapping, developing the likelihood function for Bayesian integration is challenging, and for multi-state systems (MSS) it is even more difficult. The major disadvantage of previous approaches is the intensive computation needed to develop the likelihood function, caused by the workload of selecting the combinations of component-state vectors consistent with the overlapping data. An improved fully Bayesian integration approach from a geometric perspective is proposed for the reliability assessment of MSS with overlapping data. In this method, a specific combination of component states is regarded as a state vector, which leads to a specific system state of the MSS, and all state vectors generate a system state space. The overlapping data are regarded as constraints that create hyperplanes in the system state space, and a point in a hyperplane corresponds to a particular combination of state vectors. In light of the features of these constraints, the proposed approach introduces space partition and hyperplane segmentation, which reduce the selection workload significantly and simplify the likelihood function for overlapping data. Two examples demonstrate the feasibility and efficiency of the proposed approach.
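The geometric idea can be made concrete with a toy: an overlapping observation acts as a linear constraint, and the consistent state vectors are the lattice points on the resulting hyperplane. The sum constraint below is a stand-in assumption for the paper's more general MSS constraints.

```python
from itertools import product

def consistent_state_vectors(n_components, n_states, observed_total):
    """Enumerate component-state vectors consistent with an
    overlapping system-level observation. Treating the observation as
    a linear constraint, the feasible vectors are the points of the
    component-state lattice lying on a hyperplane; space partition and
    hyperplane segmentation avoid scanning the full lattice."""
    return [v for v in product(range(n_states), repeat=n_components)
            if sum(v) == observed_total]

# Two binary components whose states are only observed in aggregate:
feasible = consistent_state_vectors(2, 2, 1)
```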