By exploiting the ubiquitous and reliable coverage of low Earth orbit (LEO) satellite networks equipped with optical inter-satellite links (OISLs), computation offloading services can be provided to any user without a proximal server, although the limited computation and storage resources on satellites are an important factor affecting the maximum task completion time. In this paper, we study a delay-optimal multi-satellite collaborative computation offloading scheme that allows satellites to actively migrate tasks among themselves over the high-speed OISLs, so that tasks with long queuing delays are served as quickly as possible by utilizing idle computation resources in the neighborhood. To satisfy the delay requirements of delay-sensitive tasks, we first propose a deadline-aware task scheduling scheme in which a priority model sorts the order in which tasks are served based on their deadlines, and then derive a delay-optimal collaborative offloading scheme in which tasks that cannot be completed locally are migrated to other idle satellites. Simulation results demonstrate the effectiveness of our multi-satellite collaborative computation offloading strategy in reducing task completion time and improving the resource utilization of the LEO satellite network.
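To make the deadline-aware scheduling idea concrete, the following is a minimal sketch of earliest-deadline-first service from a priority queue; the task tuple layout, `cpu_rate` parameter, and the earliest-deadline priority rule are illustrative assumptions, not the paper's exact priority model.

```python
import heapq

# Hypothetical sketch: tasks are (deadline, arrival_time, cpu_cycles) tuples
# served in deadline order; a real scheme would also migrate late tasks
# to idle neighbor satellites over the OISLs.
def schedule(tasks, cpu_rate):
    """Serve tasks earliest-deadline-first; return completion times."""
    queue = list(tasks)
    heapq.heapify(queue)                      # earliest deadline on top
    clock, finish = 0.0, {}
    while queue:
        deadline, arrival, cycles = heapq.heappop(queue)
        clock = max(clock, arrival) + cycles / cpu_rate
        finish[(deadline, arrival)] = clock   # candidate for migration if late
    return finish

print(schedule([(5.0, 0.0, 2e9), (2.0, 0.0, 1e9)], cpu_rate=1e9))
```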
Code acquisition is the core operation for signal synchronization in spread-spectrum receivers. To reduce the computational complexity and latency of code acquisition, this paper proposes an efficient scheme employing the sparse Fourier transform (SFT), together with a hardware architecture for field programmable gate array (FPGA) and application-specific integrated circuit (ASIC) implementation. Efforts are made at both the algorithmic and implementation levels to enable merged searching of code phase and Doppler frequency without incurring massive hardware expenditure. Theoretical analysis and experimental results show that, compared with existing code acquisition approaches, the proposed design shortens processing latency and reduces hardware complexity without degrading the acquisition probability.
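For context, the baseline that the SFT accelerates is FFT-based circular correlation, which searches all code phases with one frequency-domain product; the sketch below uses a random stand-in for the PRN code and an arbitrary delay, both illustrative.

```python
import numpy as np

# FFT-based circular correlation: the conventional code-phase search that
# an SFT-based acquisition scheme speeds up. Code length and delay are
# made-up values for illustration.
n = 1023
code = np.sign(np.random.randn(n))                   # stand-in PRN code
rx = np.roll(code, 417) + 0.5 * np.random.randn(n)   # delayed, noisy signal
corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(code)))
print(int(np.argmax(np.abs(corr))))                  # recovered phase: 417
```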
Deep neural networks (DNNs) have achieved great success in many data processing applications. However, high computational complexity and storage cost make deep learning difficult to deploy on resource-constrained devices, and the associated power consumption is not environmentally friendly. In this paper, we focus on low-rank optimization for efficient deep learning. In the space domain, DNNs are compressed by low-rank approximation of the network parameters, which directly reduces the storage requirement through a smaller number of parameters. In the time domain, the network parameters can be trained in a few subspaces, which enables efficient training with fast convergence. We summarize model compression in the spatial domain into three categories: pre-train, pre-set, and compression-aware methods. Together with a series of compatible techniques such as sparse pruning, quantization, and entropy coding, these methods can be assembled into an integrated framework with lower computational complexity and storage. In addition to summarizing recent technical advances, we present two findings that motivate future work. One is that the effective rank, derived from the Shannon entropy of the normalized singular values, outperforms conventional sparsity measures such as the $ \ell_1 $ norm for network compression. The other is the spatial-temporal balance of tensorized neural networks: to accelerate their training, it is crucial to leverage redundancy for both model compression and subspace training.
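A short sketch of the two ingredients named in the abstract: SVD-based low-rank factorization of a weight matrix, and the effective rank computed as the exponential of the Shannon entropy of the normalized singular values (the definition of Roy and Vetterli). The layer shapes are illustrative placeholders.

```python
import numpy as np

def effective_rank(W):
    """exp of the Shannon entropy of normalized singular values."""
    s = np.linalg.svd(W, compute_uv=False)
    p = s / s.sum()                       # normalized singular values
    return float(np.exp(-(p * np.log(p + 1e-12)).sum()))

def low_rank_factorize(W, r):
    """W ~ A @ B with rank r, storing r*(m+n) instead of m*n values."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :r] * s[:r], Vt[:r]

# Illustrative nearly-low-rank "layer" weight matrix.
W = np.random.randn(256, 32) @ np.random.randn(32, 512) * 0.01
A, B = low_rank_factorize(W, r=round(effective_rank(W)))
print(effective_rank(W), np.linalg.norm(W - A @ B) / np.linalg.norm(W))
```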
Beam-hopping technology has become a major research hotspot in satellite communication as a means of enhancing communication capacity and flexibility. However, beam hopping turns the traditionally continuous time-division multiplexed signal of the forward downlink into a burst signal, so satellite terminal receivers must solve several key issues such as rapid burst-signal synchronization and high-performance reception. This paper first analyzes the key issues of burst communication for traffic signals in beam-hopping systems, and then compares typical carrier synchronization algorithms for burst signals. Next, to meet the requirements of beam-hopping communication systems for efficient burst reception of low signal-to-noise ratio downlink signals in the forward link, a decoding-assisted bidirectional variable-parameter iterative carrier synchronization technique is proposed, which introduces iterative processing into carrier synchronization. Tailored to the characteristics of communication-signal carrier synchronization, the new bidirectional variable-parameter iteration breaks through the traditional understanding that loop structures cannot adapt to low signal-to-noise ratio burst demodulation. Finally, research and performance simulations are conducted using the DVB-S2X standard physical layer frame format adopted in high-throughput satellite communication systems. The results show that the proposed technique can significantly shorten the carrier synchronization time of burst signals, achieve fast synchronization of low signal-to-noise ratio burst signals, and offer the unique advantage of flexibly adjustable parameters.
Deep learning has achieved excellent results in various computer vision tasks, especially in fine-grained visual categorization, which aims to distinguish subordinate categories within label-level categories. Due to high intra-class variance and high inter-class similarity, fine-grained visual categorization is extremely challenging. This paper first briefly introduces and analyzes the related public datasets. After that, some of the latest methods are reviewed. Based on the feature types, the feature processing methods, and the overall model structure, we divide them into three types: methods based on general convolutional neural networks (CNNs) and strong part-level supervision, methods based on single-feature processing, and methods based on multiple-feature processing. Most methods of the first type have a relatively simple structure, reflecting the initial stage of research. The methods of the other two types include models with special structures and training processes, which help obtain discriminative features. We conduct a specific analysis of several methods that achieve high accuracy on public datasets. In addition, we argue that future research should focus on reducing existing methods' demand for large amounts of data and computing power. In terms of technology, extracting subtle feature information with the burgeoning vision transformer (ViT) is also an important research direction.
A low-Earth-orbit (LEO) satellite network can provide full-coverage access services worldwide and is an essential candidate for future 6G networking. However, the large variability in the geographic distribution of the Earth's population leads to an uneven distribution of access service volume, and the limited resources of satellites are far from sufficient to serve the traffic in hotspot areas. To enhance the forwarding capability of satellite networks, we first assess how hotspot areas under different load cases and spatial scales affect the overall network throughput of an LEO satellite network. Then, we propose a multi-region cooperative traffic scheduling algorithm that migrates low-grade traffic from hotspot areas to coldspot areas for forwarding, significantly increasing the overall throughput of the satellite network at the cost of some end-to-end forwarding latency. The algorithm can exploit satellite resources globally and improve the utilization of network resources. We model the cooperative multi-region scheduling of large-scale LEO satellites and, based on the model, build a system testbed in OMNET++ to compare the proposed method with existing techniques. The simulations show that our proposed method reduces the packet loss probability by 30% and improves the resource utilization ratio by 3.69%.
A dynamic multi-beam resource allocation algorithm based on on-board distributed computing is proposed in this paper for large low Earth orbit (LEO) constellations. The allocation is a combinatorial optimization process under a series of complex constraints, which is important for improving the match between resources and requirements. Complex algorithms are not feasible because on-board LEO resources are limited. The proposed genetic algorithm (GA), based on a two-dimensional individual model and an uncorrelated single-paternal-inheritance method, is designed to support distributed computation and thus enhance the feasibility of on-board application. A distributed system composed of eight embedded devices is built to verify the algorithm, and a typical scenario is constructed in the system to evaluate the resource allocation process, the algorithm's mathematical model, the trigger strategy, and the distributed computation architecture. According to the simulation and measurement results, the proposed algorithm can allocate more than 1500 tasks within 14 s with a success rate of more than 91% in a typical scenario, and the response time is reduced by 40% compared with the conventional GA.
With the extensive application of large-scale array antennas, the increasing number of array elements raises the dimension of the received signals, making it difficult to meet the real-time requirements of direction of arrival (DOA) estimation due to algorithmic computational complexity. Traditional subspace algorithms require estimating the covariance matrix, which is computationally expensive and prone to producing spurious peaks. To reduce the computational complexity of DOA estimation and improve estimation accuracy for large arrays, this paper proposes a DOA estimation method based on the Krylov subspace and a weighted $ {l}_{1} $-norm. The method uses multistage Wiener filter (MSWF) iterations to obtain a basis of the Krylov subspace as an estimate of the signal subspace, applies a measurement matrix to reduce the dimensionality of the signal-subspace observations, constructs a weighting matrix, and combines sparse reconstruction to formulate a convex optimization problem based on the residual sum of squares and the weighted $ {l}_{1} $-norm, from which the target DOAs are solved. Simulation results show that the proposed method offers high resolution for large arrays, effectively suppresses spurious peaks, reduces computational complexity, and is robust in low signal to noise ratio (SNR) environments.
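The final convex step can be illustrated in isolation. The sketch below solves a residual-sum-of-squares plus weighted $ {l}_{1} $-norm problem over an angle grid with CVXPY; the MSWF subspace estimate and the paper's weighting-matrix construction are replaced by a raw snapshot and uniform weights, so everything except the cost structure is an assumption.

```python
import numpy as np
import cvxpy as cp

def steering(grid_deg, m):
    """ULA steering vectors (half-wavelength spacing) over an angle grid."""
    return np.exp(1j * np.pi * np.outer(np.arange(m),
                                        np.sin(np.deg2rad(grid_deg))))

m, grid = 16, np.arange(-60.0, 60.5, 0.5)
A = steering(grid, m)                                     # array manifold
y = steering(np.array([-10.0, 12.0]), m).sum(axis=1)      # two-source snapshot
w = np.ones(len(grid))                                    # placeholder weights

x = cp.Variable(len(grid), complex=True)
cost = cp.sum_squares(A @ x - y) + 0.5 * cp.sum(cp.multiply(w, cp.abs(x)))
cp.Problem(cp.Minimize(cost)).solve()
print(grid[np.argsort(np.abs(x.value))[-2:]])             # peaks near true DOAs
```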
The warhead of a ballistic missile may precess due to lateral moments during release. The resulting micro-Doppler effect is determined by parameters such as the target's motion state and size. This paper proposes a three-dimensional reconstruction method for the precessing warhead based on micro-Doppler analysis and the inverse Radon transform (IRT). The precession parameters are extracted by micro-Doppler analysis from three radars, and the IRT is used to estimate the size of the target. The scatterers of the target can then be reconstructed from these parameters. Simulation results illustrate the effectiveness of the proposed method.
The acquisition, analysis, and prediction of the radar cross section (RCS) of a target are of great strategic significance in the military domain. However, the RCS values at all azimuths are hardly accessible for non-cooperative targets due to the limitations of radar observation azimuth and detection resources. Traditional methods based on statistical theory fail to predict the azimuth-dimensional RCS value satisfactorily because of the azimuth sensitivity of the target RCS. To address this problem, an improved neural basis expansion analysis for interpretable time series forecasting (N-BEATS) network that incorporates a physical model prior is proposed to predict the azimuth-dimensional RCS value accurately. Concretely, physical model-based constraints are imposed on the network by constructing a scattering-center module based on the target scattering-center model. Besides, a superimposed seasonality module is introduced to better capture high-frequency information, and training-set augmentation provides complementary information for learning the predictions. Extensive simulations and experimental results validate the effectiveness of the proposed method.
Imaging detection is an important means of obtaining target information. Traditional imaging detection technology mainly collects the intensity and spectral information of the target to classify it; in practical applications with mixed scenes, this is often insufficient for target recognition. Compared with intensity detection, polarization detection can effectively improve the recognition accuracy of ground targets (such as camouflaged targets). In this paper, the reflection mechanism of the target surface is studied from a microscopic point of view, and a polarization characteristic model is established to express the relationship between the polarization state of the reflected signal and the target surface parameters. A polarization characteristic test experiment is carried out, and the target surface parameters are retrieved from the experimental data. The results show that the degree of polarization (DOP) is closely related to the detection zenith and azimuth angles: the DOP of the target is smallest in the direction of light-source incidence and largest in the direction of specular reflection. Different materials exhibit different polarization characteristics, so target classification can be achieved by comparing their DOPs.
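As a worked example of the DOP quantity the abstract compares across materials, the snippet below computes Stokes parameters from four-angle polarimetric intensities and the standard definition DOP = sqrt(Q^2 + U^2 + V^2)/I; the intensity values are made-up numbers for illustration.

```python
import numpy as np

def stokes_from_intensities(i0, i45, i90, i135, i_rcp=None, i_lcp=None):
    """Stokes parameters from linear (and optional circular) analyzer readings."""
    I = i0 + i90
    Q = i0 - i90
    U = i45 - i135
    V = 0.0 if i_rcp is None else (i_rcp - i_lcp)
    return I, Q, U, V

def dop(I, Q, U, V):
    """Degree of polarization: fraction of the intensity that is polarized."""
    return np.sqrt(Q**2 + U**2 + V**2) / I

I, Q, U, V = stokes_from_intensities(0.8, 0.55, 0.3, 0.55)  # illustrative data
print(f"DOP = {dop(I, Q, U, V):.3f}")   # larger near the specular direction
```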
To analyze how time synchronization error, phase synchronization error, frequency synchronization error, the internal delay of the transceiver system, and the range and angle errors between unit radars affect target detection performance, a spatial detection model of distributed high-frequency surface wave radar (distributed-HFSWR) is first established in this paper. Within this model, a curve-fitting-based method for accurately extracting the direct-wave spectrum is proposed to obtain the system internal delay and frequency synchronization error under a complex electromagnetic background and low signal to noise ratio (SNR), and to compensate for the range and Doppler frequency shifts caused by time-frequency synchronization errors. The direct-wave component is extracted from the spectrum, the range and Doppler estimation errors are reduced by curve fitting, and the fitting accuracy of the parameters is improved. Then, the influence of frequency synchronization error on target range and radial Doppler velocity is quantitatively analyzed, and the relationship between frequency synchronization error and the radial Doppler velocity and range shifts is given. Finally, the system synchronization parameters of the experimental distributed-HFSWR are obtained by the proposed curve-fitting spectrum extraction method, the experimental data are compensated to correct the target shifts, and the correct target parameter information is obtained. Simulations and experimental results demonstrate the correctness and superiority of the proposed method, theoretical derivation, and detection model.
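The refinement idea behind the curve-fitting step can be sketched generically: fit a parametric peak model to coarse spectral bins to localize the direct-wave component off-grid. The Gaussian peak shape, grid, and noise level below are assumptions, not the paper's model.

```python
import numpy as np
from scipy.optimize import curve_fit

def peak(f, a, f0, w):
    """Assumed Gaussian peak shape around an off-grid center f0."""
    return a * np.exp(-((f - f0) / w) ** 2)

f = np.linspace(-5, 5, 101)                    # coarse Doppler/range bins
true_f0 = 0.37                                 # off-grid true peak position
y = peak(f, 1.0, true_f0, 0.8) + 0.05 * np.random.randn(f.size)

(a, f0, w), _ = curve_fit(peak, f, y, p0=[y.max(), f[np.argmax(y)], 1.0])
print(f"grid estimate {f[np.argmax(y)]:+.2f}, fitted estimate {f0:+.3f}")
```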
Existing specific emitter identification (SEI) methods based on hand-crafted features suffer from feature information loss and multiple processing stages, which reduce identification accuracy and complicate the identification procedure. In this paper, we propose a deep SEI approach based on multidimensional feature extraction of radio frequency fingerprints (RFFs), namely RFFsNet-SEI. In particular, we extract multidimensional physical RFFs from the received signal by means of variational mode decomposition (VMD) and the Hilbert transform (HT). The physical RFFs and I-Q data are combined into balanced-RFFs, which are then used to train RFFsNet-SEI. By introducing model-aided RFFs into the neural network, a hybrid-driven scheme combining physical features and I-Q data is constructed, which improves the physical interpretability of RFFsNet-SEI. Meanwhile, since RFFsNet-SEI identifies individual emitters from raw received data in an end-to-end manner, it accelerates SEI implementation and simplifies the identification procedure. Moreover, because RFFsNet-SEI extracts both the temporal and spectral features of the received signal, identification accuracy is improved. Finally, we compare RFFsNet-SEI with its counterparts in terms of identification accuracy, computational complexity, and prediction speed. Experimental results illustrate that the proposed method outperforms the counterparts on both a simulation dataset and a real dataset collected in an anechoic chamber.
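The Hilbert-transform half of the feature extraction is easy to demonstrate: instantaneous amplitude, phase, and frequency of one decomposed mode. VMD itself is omitted here (it is not in SciPy), so the synthetic `mode` below stands in for a VMD output; sampling rate and modulation are illustrative.

```python
import numpy as np
from scipy.signal import hilbert

fs = 1e6
t = np.arange(2048) / fs
# Stand-in for one VMD mode: a carrier with a small phase wobble.
mode = np.cos(2 * np.pi * 1e5 * t + 0.3 * np.sin(2 * np.pi * 2e3 * t))

z = hilbert(mode)                              # analytic signal
amp = np.abs(z)                                # instantaneous amplitude
phase = np.unwrap(np.angle(z))                 # instantaneous phase
inst_freq = np.diff(phase) * fs / (2 * np.pi)  # instantaneous frequency (Hz)
print(amp.mean(), inst_freq.mean())            # physical-RFF-style features
```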
The direction from which ground-based interference reaches a satellite is generally very close to that of the satellite's spot beam, so traditional array anti-jamming methods may cause significant loss to the uplink signal while suppressing the interference. In this paper, an aperiodic multistage array is adopted, and a sub-array aperiodic distribution optimization scheme based on parallel differential evolution is proposed, which effectively improves the beam resolution and suppresses grating lobes. On this basis, a two-stage signal processing method is used to suppress interference. Finally, the comprehensive performance of the proposed scheme is evaluated and verified.
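A toy version of the layout optimization: choose aperiodic element positions of a linear array to minimize the peak sidelobe/grating lobe level with SciPy's differential evolution (its `workers` option parallelizes the population evaluation). The 8-element array, bounds, and cost function are illustrative stand-ins for the paper's sub-array distribution problem.

```python
import numpy as np
from scipy.optimize import differential_evolution

def peak_sidelobe(positions):
    """Peak sidelobe level (dB) of a linear array, main lobe excluded."""
    u = np.linspace(-1, 1, 721)                          # sin(theta) grid
    af = np.abs(np.exp(2j * np.pi * np.outer(u, positions)).sum(axis=1))
    af /= af.max()
    mask = np.abs(u) > 0.05                              # exclude main lobe
    return 20 * np.log10(af[mask].max())

bounds = [(i * 0.5, i * 0.5 + 2.0) for i in range(8)]    # wavelengths
res = differential_evolution(peak_sidelobe, bounds, seed=1, maxiter=50)
print(res.fun, res.x)                                    # PSL (dB), layout
```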
Classical localization methods use Cartesian or polar coordinates and require a priori range information to decide whether to estimate the position or only the bearing. The modified polar representation (MPR) unifies the near-field and far-field models, alleviating this thresholding effect. Existing localization methods in MPR based on angle of arrival (AOA) and time difference of arrival (TDOA) measurements resort to semidefinite relaxation (SDR) or Gauss-Newton iteration, which are computationally complex and may diverge. This paper formulates a pseudo-linear equation between the measurements and the unknown MPR position, which leads to a closed-form solution for the hybrid TDOA-AOA localization problem, namely hybrid constrained optimization (HCO). HCO attains Cramér-Rao bound (CRB)-level accuracy under mild Gaussian noise. Compared with the existing closed-form solutions for the hybrid TDOA-AOA case, HCO performs comparably to the hybrid generalized trust region subproblem (HGTRS) solution and better than the hybrid successive unconstrained minimization (HSUM) solution in the large-noise region, while its computational complexity is lower than that of HGTRS. Simulations validate that HCO achieves the CRB when the noise is small, as does the maximum likelihood estimator (MLE), but the MLE deviates from the CRB earlier.
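The pseudo-linear idea is easiest to see in the bearings-only special case: each AOA theta_i yields the linear constraint [sin(theta_i), -cos(theta_i)] u = [sin(theta_i), -cos(theta_i)] s_i on the source u, so ordinary least squares gives a closed-form estimate. The paper's hybrid TDOA-AOA equations in MPR coordinates are analogous but not reproduced here; sensor layout and noise level below are illustrative.

```python
import numpy as np

sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
u_true = np.array([6.0, 3.0])
theta = np.arctan2(u_true[1] - sensors[:, 1], u_true[0] - sensors[:, 0])
theta += 0.002 * np.random.randn(theta.size)          # mild AOA noise

# Pseudo-linear system: each bearing is one linear equation in u.
A = np.c_[np.sin(theta), -np.cos(theta)]
b = np.sum(A * sensors, axis=1)
u_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print(u_hat)                                          # close to (6, 3)
```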
Linear minimum mean square error (MMSE) detection achieves near-optimal performance for massive multiple-input multiple-output (MIMO) systems but inevitably involves complicated matrix inversion, which entails high complexity. To avoid exact matrix inversion, a considerable number of implicit and explicit approximate-matrix-inversion detection methods have been proposed. By combining the advantages of both explicit and implicit matrix inversion, this paper introduces a new low-complexity signal detection algorithm. Firstly, the relationship between the implicit and explicit techniques is analyzed. Then, an enhanced Newton iteration method is introduced to realize approximate MMSE detection for the massive MIMO uplink. The improved Newton iteration significantly reduces the complexity of the conventional Newton iteration; however, its complexity remains high for later iterations, so it is applied only to the first two. For subsequent iterations, we propose a novel trace iterative method (TIM) based low-complexity algorithm, which has significantly lower complexity than higher-order Newton iterations. Convergence guarantees for the proposed detector are also provided. Numerical simulations verify that the proposed detector exhibits significant performance enhancement over recently reported iterative detectors and achieves close-to-MMSE performance while retaining its low-complexity advantage for systems with hundreds of antennas.
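For reference, the classical Newton (Newton-Schulz) iteration that the paper enhances approximates the inverse via X_{k+1} = X_k (2I - A X_k), typically initialized from the inverse of the diagonal. The sketch below shows only this baseline on an MMSE filtering matrix; the sizes and regularization are common illustrative choices, and the paper's enhanced Newton and TIM steps are not reproduced.

```python
import numpy as np

def newton_inverse(A, iters=3):
    """Approximate A^{-1} by Newton-Schulz, starting from diag(A)^{-1}."""
    X = np.diag(1.0 / np.diag(A))            # D^{-1} initial guess
    I = np.eye(A.shape[0])
    for _ in range(iters):
        X = X @ (2 * I - A @ X)              # quadratic convergence
    return X

K, N, snr = 16, 128, 10.0                    # users, BS antennas, linear SNR
H = (np.random.randn(N, K) + 1j * np.random.randn(N, K)) / np.sqrt(2)
A = H.conj().T @ H + (K / snr) * np.eye(K)   # MMSE filtering matrix
print(np.linalg.norm(newton_inverse(A) @ A - np.eye(K)))  # shrinks per iter
```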
With the rapid development of cloud manufacturing technology and the new generation of artificial intelligence, the new cloud manufacturing system (NCMS), built on the connotation of cloud manufacturing 3.0, presents a new business model of “Internet of everything, intelligent leading, data driving, shared services, cross-border integration, and universal innovation”. As network boundaries become increasingly blurred, NCMS faces security risks such as unauthorized equipment use, account theft, static and coarse-grained access control policies, unauthorized access, supply chain attacks, sensitive data leaks, and attacks on industrial control vulnerabilities. Traditional security architectures rely mainly on information security technology, which cannot meet the active security protection requirements of NCMS. To solve these problems, this paper proposes an integrated cloud-edge-terminal security system architecture for NCMS. It adopts the zero-trust concept and effectively integrates multiple security capabilities spanning the network, equipment, cloud computing environment, applications, identity, and data. It employs a new access control mode of “continuous verification + dynamic authorization”, classified access control mechanisms such as attribute-based, role-based, and policy-based access control, and a new blockchain-based data security protection system, achieving “trustworthy subject identity, controllable access behavior, and effective protection of subject and object resources”. This architecture provides an active security protection method for NCMS in the digital transformation of large enterprises, and can effectively enhance network security protection capabilities in the face of increasingly severe network security situations.
Low Earth orbit (LEO) satellite networks exhibit distinct characteristics, e.g., limited resources on individual satellite nodes and a dynamic network topology, which pose many challenges for routing algorithms. To satisfy the quality of service (QoS) requirements of various users, it is critical to research efficient routing strategies that fully utilize satellite resources. This paper proposes a multi-QoS-information optimized routing algorithm based on reinforcement learning for LEO satellite networks. Under limited satellite resources, it guarantees that services with high-level assurance demands are prioritized, while considering the load-balancing performance of the network for services with low-level assurance demands to ensure full and effective utilization of satellite resources. An auxiliary path search algorithm is also proposed to accelerate the convergence of the routing algorithm. Simulation results show that the generated routing strategy can promptly process and fully meet the QoS demands of high-assurance services while effectively improving the load-balancing performance of the links.
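A minimal tabular sketch of reinforcement-learning-based next-hop selection: Q-values per (satellite, neighbor) pair, epsilon-greedy exploration, and a reward that trades delay against link load. The reward shape and all parameters are assumptions for illustration, not the paper's multi-QoS formulation.

```python
import random
from collections import defaultdict

Q = defaultdict(float)                 # Q[(satellite, next_hop)]
alpha, gamma, eps = 0.1, 0.9, 0.1      # illustrative hyperparameters

def choose_next_hop(sat, neighbors):
    """Epsilon-greedy next-hop selection."""
    if random.random() < eps:
        return random.choice(neighbors)
    return max(neighbors, key=lambda n: Q[(sat, n)])

def update(sat, nxt, delay, load, neighbors_of_next):
    """One Q-learning step: reward mixes QoS (delay) and load balancing."""
    reward = -delay - 0.5 * load
    best_next = max((Q[(nxt, n)] for n in neighbors_of_next), default=0.0)
    Q[(sat, nxt)] += alpha * (reward + gamma * best_next - Q[(sat, nxt)])

nxt = choose_next_hop("s1", ["s2", "s3"])
update("s1", nxt, delay=12.0, load=0.3, neighbors_of_next=["s1", "s4"])
```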
With the rapid development of low-orbit satellite communication networks both domestically and internationally, space-terrestrial integrated networks are the future development trend. For space and terrestrial networks with limited resources, the utilization efficiency of the resources of the entire space-terrestrial integrated network is indirectly affected by the core network. To improve the response efficiency of core network expansion, early warning of core network element capacity is necessary. Based on the integrated space-terrestrial architecture, this paper considers multidimensional factors, including the number of terminals, the number of login users, and user migration patterns during holidays. Using artificial intelligence (AI) techniques, the registered users of the access and mobility management function (AMF), the authorized users of the unified data management (UDM), and the protocol data unit (PDU) sessions of the session management function (SMF) are predicted in combination with the numbers of login users and terminals. The core network element capacity can thereby be predicted in advance. The proposed method is shown to be effective on data from a real network.
To reduce the negative impact of power amplifier (PA) nonlinear distortion caused by the high peak-to-average power ratio (PAPR) of the orthogonal frequency division multiplexing (OFDM) waveform in integrated radar and communication (RadCom) systems, this paper studies channel estimation in passive sensing scenarios. Adaptive channel estimation methods are proposed for different pilot patterns, taking both nonlinear distortion and channel sparsity into account. The proposed methods obtain sparse channel estimates by manipulating the least squares (LS) frequency-domain channel estimation results to preserve the most significant taps, and a decision-aided method is used to optimize the sparse estimates and reduce the effect of nonlinear distortion. Numerical results show that the channel estimation performance of the proposed methods is better than that of conventional methods under different pilot patterns. In addition, the bit error rate performance in communication and the passive radar detection performance show that the proposed methods offer good comprehensive performance.
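The sparsification step can be sketched in a few lines: form the LS frequency-domain estimate on the pilots, transform to the delay domain, and keep only the strongest taps. The pilot layout, tap budget, and channel below are illustrative assumptions, and the decision-aided refinement is omitted.

```python
import numpy as np

def sparse_channel_estimate(y_pilot, x_pilot, n_fft, keep=8):
    """LS estimate, then enforce sparsity by keeping the strongest taps."""
    h_ls_freq = y_pilot / x_pilot                  # LS estimate on pilots
    h_time = np.fft.ifft(h_ls_freq, n_fft)         # delay-domain response
    weak = np.argsort(np.abs(h_time))[:-keep]      # all but `keep` strongest
    h_time[weak] = 0.0
    return np.fft.fft(h_time, n_fft)

n = 64
h = np.zeros(n, complex); h[[0, 3, 7]] = [1.0, 0.5j, 0.2]   # 3-tap channel
x = np.exp(1j * np.pi * np.random.randint(0, 4, n) / 2)     # QPSK pilots
y = np.fft.fft(h, n) * x + 0.02 * np.random.randn(n)        # received pilots
print(np.abs(np.fft.ifft(sparse_channel_estimate(y, x, n)))[:8].round(2))
```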
There has been a growing body of research on swarm unmanned aerial vehicles (UAVs) in recent years; as radar targets they are characterized by small size, low speed, and low altitude. To counter swarm UAVs, an anti-UAV radar system design based on multiple input multiple output (MIMO) is put forward, which improves resolution, angle accuracy, data rate, and tracking flexibility for swarm UAV detection. Target resolution and detection are the core problems in detecting a swarm UAV. The distinct advantage of the MIMO system in angular accuracy is demonstrated by comparing MIMO radar with phased array radar. Although MIMO radar offers better resolution, target detection for swarm UAVs remains difficult, so this paper proposes a multi-mode data fusion algorithm based on deep neural networks to improve the detection performance. Signal processing and data processing based on this detection fusion algorithm are then designed, forming a high-resolution detection loop. Several simulations illustrate the feasibility of the designed system and the proposed algorithm.
Tracking fast-moving objects under occlusion is an important research topic in computer vision. Although numerous notable contributions have been made in this field, few of them simultaneously incorporate both the object's extrinsic features and its intrinsic motion patterns, which limits the achievable tracking accuracy. In this paper, a speed-accuracy-balanced model is put forward on the basis of the efficient convolution operators (ECO) model. The model uses a simple correlation filter to track the object in real time, and when the tracking state is judged to be poor, it adopts a sophisticated deep neural network to extract high-level features and train a more complex filter that corrects the tracking mistakes. Furthermore, for scenarios involving regular fast motion, a motion model based on the Kalman filter is designed, which greatly improves tracking stability because it can predict the object's future location from its previous movement pattern. Additionally, instead of periodically updating the tracking model and training samples, a constrained updating condition is proposed, which effectively mitigates contamination of the tracker by the background and undesirable samples, avoiding model degradation when occlusion happens. Comprehensive experiments show that our tracking model outperforms ECO on object tracking benchmark 2015 (OTB100), and improves the area under curve (AUC) by about 8% and 32% over ECO in the fast-motion and occlusion scenarios of our own collected dataset.
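A constant-velocity Kalman filter of the kind such a motion model might use is sketched below: predict through an occluded frame, then fuse a detection when the object reappears. The state layout (x, y, vx, vy) and noise levels are illustrative, not tuned values from the paper.

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
              [0, 0, 1, 0], [0, 0, 0, 1]], float)   # constant-velocity model
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)   # we observe position only
Qn, Rn = 1e-2 * np.eye(4), 4.0 * np.eye(2)          # assumed noise levels

def predict(x, P):
    """Time update: trust the motion model (used during occlusion)."""
    return F @ x, F @ P @ F.T + Qn

def correct(x, P, z):
    """Measurement update: fuse the detector's position estimate."""
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Rn)
    return x + K @ (z - H @ x), (np.eye(4) - K @ H) @ P

x, P = np.array([0, 0, 2, 1], float), np.eye(4)
x, P = predict(x, P)                                # occluded frame
x, P = correct(x, P, np.array([2.1, 0.9]))          # visible frame
print(x.round(2))
```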
In challenging situations such as low illumination, rain, and background clutter, the stability of the thermal infrared (TIR) spectrum can help the red-green-blue (RGB) visible spectrum improve tracking performance. However, high-level image information and modality-specific features have not been sufficiently studied. The proposed correlation filter uses a fused saliency content map to improve filter training and extracts different features for each modality. The fused content map is introduced into the spatial regularization term of the correlation filter to highlight training samples in the content region; it also avoids the incompleteness of the content region caused by challenging situations. Additionally, different features are extracted according to the modality characteristics and are fused by the designed response-level fusion strategy. The alternating direction method of multipliers (ADMM) algorithm is used to solve the tracker training efficiently. Experiments on large-scale benchmark datasets show the effectiveness of the proposed tracker compared with state-of-the-art traditional and deep learning based trackers.
Discrete event system (DES) models promote system engineering, including system design, verification, and assessment. Advances in manufacturing technology have enabled us to fabricate complex industrial systems; consequently, modeling methodologies adept at handling complexity and scalability are imperative. Moreover, industrial systems are no longer quiescent, so the intelligent operations of a system should be dynamically specified in its model. In this paper, the composition of subsystem behaviors is studied to generate the complexity and scalability of the global system model, and a Boolean semantic specifying algorithm is proposed for generating dynamic intelligent operations in the model. In traditional modeling approaches, any change or addition of specifications necessitates the complete resubmission of the system model, a resource-consuming and error-prone process. Compared with traditional approaches, our approach has three remarkable advantages: (i) an established Boolean semantic is suitable for all kinds of systems; (ii) there is no need to resubmit the system model whenever operations are changed or added; (iii) multiple specifying tasks can be easily achieved by continuously adding new semantics. This general modeling approach therefore has wide potential for future complex and intelligent industrial systems.
Final velocity and impact angle are critical to missile guidance. A computationally efficient guidance law that comprehensively considers both performance merits is challenging and remains insufficiently addressed. Therefore, this paper solves an optimal control problem that maximizes the final velocity subject to an equality point constraint on the impact angle. It is proved that the original problem of maximizing the final velocity is equivalent to minimizing a quadratic-form cost of curvature. The closed-form guidance law is then derived using optimal control theory; it coincides with the widely used optimal guidance law with impact angle constraint (OGL-IAC) with navigation parameters of two and six. On this basis, the optimal emission angle is determined to further increase the final velocity. The derived optimal value depends solely on the initial line-of-sight angle and the impact angle constraint, and is thus practical for real-world applications. The proposed guidance law is validated by numerical simulation; the results show that the OGL-IAC is superior to the benchmark guidance laws in terms of both final velocity and miss distance.
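For readers unfamiliar with the equivalence, the abstract's statement can be restated as follows; the notation (arc length s, curvature kappa, flight-path angle theta) is assumed here and is not taken from the paper.

```latex
% Hedged restatement of the equivalence claimed in the abstract:
% maximizing the final velocity V_f is equivalent to the quadratic
% curvature cost below, under the stated impact-angle point constraint.
\min_{\kappa(\cdot)} \; J \;=\; \int_{0}^{s_f} \kappa^{2}(s)\,\mathrm{d}s
\qquad \text{s.t.} \quad \theta(s_f) \;=\; \theta_f \;\; (\text{impact angle}).
```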
The application scope of forward scatter radar (FSR) based on the Global Navigation Satellite System (GNSS) can be expanded by improving its detection capability. Firstly, the forward-scatter signal model for a target crossing the baseline is constructed. Then, a detection method for the forward-scatter signal based on the Rényi entropy of the time-frequency distribution is proposed, and the detection performance of different time-frequency distributions is compared. Simulation results show that the method based on the smoothed pseudo Wigner-Ville distribution (SPWVD) achieves the best performance. Next, combined with the FSR geometry, the influence of the relative distance between the target and the baseline on detection performance is analyzed. Finally, the proposed method is validated by anechoic chamber measurements, and the results show a 10 dB improvement in detection ability over common constant false alarm rate (CFAR) detection.
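The detection statistic itself is compact: the Rényi entropy of a normalized time-frequency distribution, which drops when a structured target signature concentrates the energy. The sketch below uses a spectrogram as a stand-in for the SPWVD, and the chirp-like signal and order alpha are illustrative.

```python
import numpy as np
from scipy.signal import spectrogram

def renyi_entropy(tfd, alpha=3.0):
    """Order-alpha Rényi entropy of a TFD normalized to a 2-D pmf."""
    p = np.abs(tfd) / np.abs(tfd).sum()
    return np.log2((p ** alpha).sum()) / (1.0 - alpha)

fs = 1e3
t = np.arange(2048) / fs
sweep = np.cos(2 * np.pi * (50 * t + 40 * t**2))   # forward-scatter-like sweep

for sig in (np.random.randn(t.size),               # noise only
            sweep + 0.5 * np.random.randn(t.size)):  # target present
    _, _, S = spectrogram(sig, fs, nperseg=128)
    print(renyi_entropy(S))                        # lower when target present
```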
Today’s air combat involves a high level of uncertainty, in which continuous or discrete variables cannot be properly represented by crisp values. With a set of membership functions, fuzzy logic is well suited to tackle such complex states and actions; however, it is unnecessary to fuzzify variables that have definite discrete semantics. Hence, this study aims to raise the level of model abstraction by proposing multiple levels of cascaded hierarchical structures from the perspective of function, namely, the functional decision tree. This method is developed for behavioral modeling of air combat systems, and its metamodel, execution mechanism, and code generation provide a sound basis for function-based behavioral modeling. As a proof of concept, an air combat simulation is developed to validate this method, and the results show that the fighter Alpha built using the proposed framework outperforms one using default scripts.
This paper considers an intelligent reflecting surface (IRS)-assisted multiple-input multiple-output (MIMO) system. To maximize the average achievable rate (AAR) under outdated channel state information (CSI), we propose a twin-timescale passive beamforming (PBF) and power allocation protocol that reduces the IRS configuration and training overhead. Specifically, the short-timescale power allocation is designed with the outdated precoder and a fixed PBF, and a new particle swarm optimization (PSO)-based long-timescale PBF optimization is proposed, in which mini-batch channel samples are used to update the fitness function. Finally, simulation results demonstrate the effectiveness of the proposed method.
Unmanned aerial vehicles (UAVs) may be subjected to unintentional radio frequency interference (RFI) or hostile jamming attacks, which can cause them to fail to track global navigation satellite system (GNSS) signals. Simultaneously realizing anti-jamming and high-precision carrier-phase differential positioning therefore becomes a dilemma. In this paper, a distortionless-phase digital beamforming (DBF) algorithm with self-calibrating antenna arrays is proposed, which makes it possible to obtain a distortionless carrier phase while suppressing jamming. Additionally, an architecture for a high-precision Beidou receiver based on anti-jamming antenna arrays is proposed. Finally, the performance of the algorithm is evaluated, including the antenna calibration accuracy, the carrier-phase distortionless accuracy, and the carrier-phase measurement accuracy without jamming; the maximal jamming to signal ratio (JSR) and real time kinematic (RTK) positioning accuracy under wideband jamming are also investigated. Experimental results based on real-life Beidou signals show that the proposed method achieves excellent precise relative positioning performance under jamming compared with other anti-jamming methods.
In this paper, a feature selection method for determining the input parameters in antenna modeling is proposed. In antenna modeling, the input features of the artificial neural network (ANN) are geometric parameters, and the selection criteria consider both the correlation and the sensitivity between a geometric parameter and the electromagnetic (EM) response. The maximal information coefficient (MIC), an exploratory data mining tool, is introduced to evaluate both linear and nonlinear correlations, while the EM response range is used to evaluate sensitivity: a wide response range over the varying values of a parameter implies that the parameter is highly sensitive, and a narrow range suggests that it is insensitive. Only parameters that are both highly correlated and sensitive are selected as ANN inputs, and the sampling space of the model is thus greatly reduced. The modeling of a wideband circularly polarized antenna is studied as an example to verify the effectiveness of the proposed method. The number of input parameters decreases from eight to four, and the testing errors of |S11| and axial ratio are reduced by 8.74% and 8.95%, respectively, compared with the ANN without feature selection.
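The two-part criterion can be mimicked with off-the-shelf tools: below, mutual information stands in for MIC (the minepy package provides MIC itself), and the sensitivity proxy is the spread of mean responses across bins of each parameter. The simulated 8-parameter geometry, response, and thresholds are all illustrative placeholders.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def response_range(xj, y, bins=5):
    """Spread of mean responses across quantile bins (sensitivity proxy)."""
    edges = np.quantile(xj, np.linspace(0, 1, bins + 1))
    ids = np.clip(np.digitize(xj, edges[1:-1]), 0, bins - 1)
    return np.ptp([y[ids == b].mean() for b in range(bins)])

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 8))                 # 8 geometric parameters
s11 = np.sin(4 * X[:, 0]) + X[:, 2] ** 2 + 0.05 * rng.normal(size=200)

mi = mutual_info_regression(X, s11, random_state=0)  # nonlinear correlation
sens = np.array([response_range(X[:, j], s11) for j in range(8)])
keep = np.where((mi > 0.1) & (sens > sens.mean()))[0]
print("selected geometric parameters:", keep)        # e.g., params 0 and 2
```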
To enable multiple missiles to simultaneously attack maneuvering targets whose maneuvers cannot be measured, a guidance law with a temporal consistency constraint based on the super-twisting observer is proposed. Firstly, the relative motion equations between the missiles and targets are established, and the topological model among the multiple agents is considered. Secondly, based on the temporal consistency constraint, a cooperative guidance law for simultaneous arrival with finite-time convergence is derived. Finally, the unknown target maneuver is regarded as a bounded disturbance; based on second-order sliding mode theory, a super-twisting sliding mode observer is devised to observe and track it, and the stability of the observer is proved. Compared with existing research, this approach only needs the sliding mode variable, which simplifies the design process. Simulation results show that the designed cooperative guidance law achieves the expected effect and ensures successful cooperative attacks even against strongly maneuvering targets.
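A minimal numerical sketch of the super-twisting algorithm in its robust-differentiator form is given below: z1 tracks the measured sliding variable f and z2 converges to its unknown bounded derivative (playing the role of the target maneuver estimate). The gains, signals, and Euler integration are illustrative, not the paper's cooperative-guidance-specific design.

```python
import numpy as np

dt, T = 1e-4, 5.0
k1, k2 = 4.0, 8.0                       # valid here since |f''| <= 4
z1, z2 = 0.0, 0.0                       # observer states

for k in range(int(T / dt)):
    t = k * dt
    f = np.sin(2 * t)                   # measured sliding-mode variable
    e = z1 - f                          # observation error
    z1 += dt * (-k1 * np.sqrt(abs(e)) * np.sign(e) + z2)
    z2 += dt * (-k2 * np.sign(e))       # discontinuous inner loop

print(abs(z2 - 2 * np.cos(2 * T)))      # z2 ~ f'(T): bounded-maneuver estimate
```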
Existing situation awareness research focuses on multi-sensor data fusion but does not fully utilize expert knowledge. To solve this problem, a heterogeneous information fusion recognition method based on a belief rule structure is proposed. By defining continuous probabilistic hesitant fuzzy linguistic term sets (CPHFLTS) and establishing a distance measure on them, a belief rule base describing the relationship between the feature space and the category space is constructed through information integration, and evidential reasoning is carried out on the input samples. Experimental results show that the proposed method can make full use of sensor data and expert knowledge for recognition, and achieves a higher correct recognition rate than other methods under different noise levels.
To identify and quantitatively assess the risks of human-computer interaction (HCI) in complex avionics systems, an HCI safety analysis framework based on system-theoretic process analysis (STPA) and the cognitive reliability and error analysis method (CREAM) is proposed. STPA-CREAM can automatically identify unsafe control actions and find the causal paths during the interaction between avionics systems and the pilot with the help of formal verification tools. The common performance conditions (CPC) of avionics systems in the aviation environment are established, and a quantitative analysis of human failure is carried out. Taking the head-up display (HUD) interaction process as an example, a case analysis is carried out, and the layered safety control structure and a formal model of the HUD interaction process are established. For the interactive behavior “Pilots approaching with HUD”, four unsafe control actions and 35 causal scenarios are identified, and the impact of common performance conditions at different levels on the pilot decision model is analyzed. The results show that the HUD's HCI level gradually improves as the CPC scores increase, and that the quality of crew cooperation and the time sufficiency of the task are key to its HCI. The case analysis shows that STPA-CREAM can quantitatively assess the hazards in HCI and identify the key factors that affect safety.
Dempster-Shafer evidence theory is broadly employed in multi-source information fusion research. Nevertheless, when fusing highly conflicting evidence, it may produce counterintuitive outcomes. To address this issue, a fusion approach based on a newly defined belief exponential divergence and Deng entropy is proposed. First, a belief exponential divergence is proposed as the conflict measure between bodies of evidence, from which the credibility of each body of evidence is calculated. Then, Deng entropy is used to calculate the information volume and thereby quantify the uncertainty of each body of evidence, and the weight of each body of evidence is obtained by integrating its credibility and uncertainty. Ultimately, the initial evidence is amended and fused using Dempster's rule of combination. The effectiveness of this approach in resolving three typical conflict paradoxes is demonstrated by arithmetic examples. Additionally, the proposed approach is applied to aerial target recognition and iris dataset-based classification to validate its efficacy. The results indicate that the proposed approach can enhance the accuracy of target recognition and effectively address the fusion of conflicting evidence.
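The uncertainty measure is compact enough to show directly: Deng entropy of a mass function is E_d = -sum_A m(A) log2( m(A) / (2^|A| - 1) ), which grows with both mass dispersion and focal-element cardinality. The example masses below are illustrative.

```python
import numpy as np

def deng_entropy(masses):
    """Deng entropy; `masses` maps frozenset focal elements to mass values."""
    return -sum(m * np.log2(m / (2 ** len(A) - 1))
                for A, m in masses.items() if m > 0)

m1 = {frozenset("a"): 0.7, frozenset("ab"): 0.3}      # fairly specific
m2 = {frozenset("abc"): 1.0}                          # maximally vague
print(deng_entropy(m1), deng_entropy(m2))             # m2 has higher entropy
```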
Multi-objective optimization (MOO) of microwave metamaterial absorbers (MMAs) normally adopts evolutionary algorithms, which require many objective function evaluations. To remedy this issue, a surrogate-based MOO algorithm is proposed in this paper, in which Kriging models are employed to approximate the objective functions. An efficient sampling strategy is presented to sequentially capture promising samples in the design region for exact evaluation. Firstly, new sample points are generated by MOO on the surrogate models. Then, new samples are captured by exploiting each objective function, and a weighted sum of the improvement of hypervolume (IHV) and the distance to the sampled points is calculated to select the new sample. The proposed algorithm is validated on benchmark problems against two well-known MOO algorithms, and two broadband MMAs are used to verify its feasibility and efficiency.
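The sampling criterion can be sketched on its own: fit per-objective Kriging (Gaussian process) models, then score candidate designs by a weighted sum of predicted hypervolume improvement and distance to already-sampled points. The toy bi-objective function, weights, and sizes below are placeholders for the MMA objectives and EM solver.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def hv2d(F, ref):
    """2-D hypervolume (minimization) w.r.t. a dominated reference point."""
    hv, y = 0.0, ref[1]
    for f1, f2 in F[np.argsort(F[:, 0])]:
        if f2 < y:
            hv += (ref[0] - f1) * (y - f2)
            y = f2
    return hv

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (20, 3))                          # sampled designs
F = np.c_[X[:, 0], 1 - np.sqrt(X[:, 0]) + X[:, 1]]      # toy objective values
gps = [GaussianProcessRegressor().fit(X, F[:, i]) for i in range(2)]

cand = rng.uniform(0, 1, (200, 3))                      # candidate designs
pred = np.c_[gps[0].predict(cand), gps[1].predict(cand)]
ref = F.max(axis=0) + 0.1
ihv = np.array([hv2d(np.vstack([F, p]), ref) - hv2d(F, ref) for p in pred])
dist = np.min(np.linalg.norm(cand[:, None] - X[None], axis=2), axis=1)
score = 0.7 * ihv / (ihv.max() + 1e-12) + 0.3 * dist / dist.max()
print("next sample for exact EM evaluation:", cand[np.argmax(score)])
```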