A mathematical model is developed to determine the optimal production lot size for a deteriorating production system under an extended product inspection policy. The last-K product inspection policy, under which the last K products in a production lot are inspected and the nonconforming items found are reworked, is considered so that nonconforming items can be reduced. Considering that products produced towards the end of a production lot are more likely to be nonconforming, an extended product inspection policy for a deteriorating production system is proposed. That is, in a production lot, inspections are performed on the middle K1 items and, after the inspections, all of the last K2 products are reworked directly without inspection. The objective is the joint optimization of the production lot size and the corresponding extended inspection policy such that the expected total cost per unit time is minimized. Since no closed-form expression exists for the optimal policy, the existence of the optimal production-inspection policy and an upper bound on the optimal lot size are established. Furthermore, an efficient solution procedure is provided to search for the optimal policy. Finally, numerical examples illustrate the proposed model and indicate that the expected total cost per unit time of the proposed inspection model is less than that of the last-K inspection policy.
A generalized multiple-mode prolate spheroidal wave function (PSWF) multi-carrier index modulation approach is proposed to improve the spectral efficiency of PSWF multi-carrier systems. Based on optimized multi-index modulation, the proposed method abandons the restriction on the number of signals allowed in different constellations, including the first and second constellations. It increases the spectral efficiency of the system while expanding the number of modulation symbol combinations and the index dimension of PSWF signals. The proposed method outperforms the PSWF multi-carrier index modulation method based on optimized multiple indexes in terms of spectral efficiency, at the expense of computational complexity and bit error performance. For example, with $n=10$ subcarriers and a bit error rate of $1\times 10^{-5}$, the spectral efficiency can be raised by roughly 12.4%.
Sensor deployment optimization has become one of the most attractive research fields in recent years. However, most previous work focuses on the deployment problem in 2D space. Compared with the traditional setting, sensor deployment in multi-dimensional space has greater research significance and practical potential for meeting detection needs in complex environments. To address this issue, a multi-dimensional space sensor network model is established, with a radar system selected as the example. Considering the possible working modes of the radar system (e.g., searching and tracking), two distinct deployment models are proposed, based on maximizing the coverage area and maximizing the target detection probability in the attack direction respectively; the latter is usually ignored in the previous literature. To uncover the optimal deployment of the sensor network, the particle swarm optimization (PSO) algorithm is improved with the proposed weight determination scheme, in which linear decreasing, a pooling strategy and cloud theory are combined for weight updating. Experimental results illustrate the effectiveness of the proposed method.
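The abstract above improves PSO through a combined weight-updating scheme (linear decreasing, pooling, cloud theory). As a hedged illustration of only the linear-decreasing component, the following minimal PSO sketch shows how the inertia weight shrinks across iterations; all names and parameter values are hypothetical, not the authors' implementation:

```python
import random

def pso_minimize(f, dim, bounds, n_particles=30, iters=200,
                 w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
    """Basic PSO with a linearly decreasing inertia weight."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for t in range(iters):
        # inertia weight decreases linearly from w_max to w_min
        w = w_max - (w_max - w_min) * t / (iters - 1)
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

The large early weight favours exploration of the deployment space; the small late weight favours local refinement, which is the rationale the paper augments with pooling and cloud-theory updates.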
A new hybrid Freeman/eigenvalue decomposition for polarimetric synthetic aperture radar (PolSAR) data is presented, based on orientation angle compensation and various extended volume models. The novel three-component model-based decomposition proceeds in three steps. Firstly, two special unitary transform matrices are applied to the coherency matrix for deorientation, decreasing the correlation between the co-polarized and cross-polarized terms. Secondly, two new conditions are proposed to distinguish man-made structures from natural media after the orientation angle compensation. Finally, to adapt to the scattering properties of different media, five different volume scattering models are used to decompose the coherency matrix. The new conditions identify man-made structures in advance, which benefits the subsequent selection of a more suitable volume scattering model. Fully polarimetric SAR data over San Francisco are used in experiments to demonstrate the efficiency of the proposed hybrid Freeman/eigenvalue decomposition.
A schlieren detection algorithm is proposed for the ground-to-air background oriented schlieren (BOS) system to achieve visualization of shock waves from high-speed aircraft. The proposed method consists of three steps. Firstly, image registration is incorporated to reduce errors caused by camera motion. Then, the background subtraction dual-model single Gaussian model (BS-DSGM) is proposed to build a precise background model; the BS-DSGM prevents the background model from being contaminated by the shock waves. Finally, the two-dimensional orthogonal discrete wavelet transform is used to extract and average the schlieren information. Experimental results show that the proposed algorithm is able to detect the aircraft in flight and to extract the schlieren information, with a schlieren detection precision of 0.96. Three image quality evaluation indices are chosen for quantitative analysis of the shock wave visualization. White Gaussian noise is added to the frames to validate the robustness of the proposed algorithm. Moreover, two-times and four-times downsampling are adopted to simulate different imaging distances, revealing how the imaging distance affects the schlieren information in the BOS system.
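The key idea of the BS-DSGM step, keeping the shock waves from contaminating the background model, can be illustrated with a much simpler per-pixel single Gaussian background model that only updates pixels classified as background. This is a hedged sketch of the general technique, not the authors' dual-model algorithm; the class name and parameter values are hypothetical:

```python
import numpy as np

class SingleGaussianBackground:
    """Per-pixel single Gaussian background model with selective update."""

    def __init__(self, first_frame, alpha=0.05, k=2.5, var0=15.0 ** 2):
        self.mu = first_frame.astype(np.float64)       # per-pixel mean
        self.var = np.full(first_frame.shape, var0)    # per-pixel variance
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        """Return a boolean foreground mask and update the model."""
        frame = frame.astype(np.float64)
        d2 = (frame - self.mu) ** 2
        fg = d2 > (self.k ** 2) * self.var             # foreground test
        bg = ~fg
        # update only where the pixel matches the background, so moving
        # structures (e.g. shock waves) do not contaminate the model
        self.mu[bg] += self.alpha * (frame[bg] - self.mu[bg])
        self.var[bg] += self.alpha * (d2[bg] - self.var[bg])
        return fg
```

Pixels more than k standard deviations from the running mean are flagged and excluded from the update, which is the contamination-prevention property the BS-DSGM generalizes.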
The problem of scheduling radar dwells in multifunction phased array radar systems is addressed, and a novel dwell scheduling algorithm is proposed. The whole scheduling process is based on an online pulse interleaving technique and takes the system timing and energy constraints into account. In order to adapt to the dynamic task load, the algorithm considers both the priorities and the deadlines of tasks. The simulation results demonstrate that, compared with the conventional adaptive dwell scheduling algorithm, the proposed one can effectively reduce the task drop rate and improve system resource utilization.
The rapid development of data communication in the modern era demands secure exchange of information. Steganography is an established method for hiding secret data from unauthorized access inside a cover object in such a way that the data are invisible to human eyes; the cover object can be an image, text, audio, or video. This paper proposes a secure steganography algorithm that hides a bitstream of secret text in the least significant bits (LSBs) of the approximation coefficients of the integer wavelet transform (IWT) of grayscale images, as well as of each component of color images, to form stego-images. The embedding and extraction phases of the proposed steganography algorithm are implemented in MATLAB. Invisibility, payload capacity, and security in terms of peak signal to noise ratio (PSNR) and robustness are the key challenges in steganography. The statistical distortion between the cover images and the stego-images is measured using the mean square error (MSE) and the PSNR, while the degree of closeness between them is evaluated using the normalized cross correlation (NCC). The experimental results show that the proposed algorithm can hide the secret text with a large payload capacity, a high level of security, and high invisibility. Furthermore, the proposed technique is computationally efficient, and better results for both PSNR and NCC are achieved compared with previous algorithms.
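The three evaluation measures named in the abstract (MSE, PSNR, NCC) have standard definitions that can be stated compactly. The sketch below uses one common normalization for NCC; the paper may use a different variant, and the function names are hypothetical:

```python
import numpy as np

def mse(cover, stego):
    """Mean square error between cover and stego images."""
    c, s = cover.astype(np.float64), stego.astype(np.float64)
    return np.mean((c - s) ** 2)

def psnr(cover, stego, peak=255.0):
    """Peak signal-to-noise ratio in dB (infinite for identical images)."""
    m = mse(cover, stego)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

def ncc(cover, stego):
    """Normalized cross correlation (one common normalization)."""
    c, s = cover.astype(np.float64), stego.astype(np.float64)
    return np.sum(c * s) / np.sum(c * c)
```

For an 8-bit image, a uniform off-by-one change of every pixel (the worst case for plain LSB embedding) yields MSE = 1 and PSNR ≈ 48.13 dB, which is why LSB-based schemes report high PSNR values.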
The test of consistency is critical for the analytic hierarchy process (AHP) methodology. When a pairwise comparison matrix (PCM) fails the consistency test, the decision maker (DM) needs to make revisions. The state of the art focuses on changing a single entry, or on creating a new matrix from the original inconsistent one, so that the modified matrix satisfies the consistency requirement. However, we have noticed that inconsistency is not only numerical but also logical, and logical inconsistency may play the more important role in the overall inconsistency. Therefore, to achieve satisfactory consistency, we should first change the entries that form a directed circuit to make the matrix logically consistent, and then adjust other entries within acceptable deviations to make the matrix numerically consistent while preserving most of the original comparison information. In this paper, we first present some definitions and theorems, based on which two effective methods are provided to identify directed circuits. Four optimization models are then proposed to adjust the original inconsistent matrix. Finally, illustrative examples and comparison studies show the effectiveness and feasibility of our method.
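The logical inconsistency discussed above corresponds to a directed circuit in the preference digraph of the PCM: draw an edge i → j whenever the DM judged i strictly preferred to j, and a circuit such as a ≻ b ≻ c ≻ a signals an intransitive judgment. As a hedged sketch (not one of the paper's two identification methods), a depth-first search can locate one such circuit:

```python
def preference_digraph(pcm):
    """Edge i -> j whenever entry a_ij > 1 (i strictly preferred to j)."""
    n = len(pcm)
    return {i: [j for j in range(n) if i != j and pcm[i][j] > 1]
            for i in range(n)}

def find_directed_circuit(graph):
    """Return one directed circuit as a list of nodes, or None if acyclic."""
    WHITE, GRAY, BLACK = 0, 1, 2        # unvisited / on stack / finished
    color = {v: WHITE for v in graph}
    parent = {}
    for root in graph:
        if color[root] != WHITE:
            continue
        stack = [(root, iter(graph[root]))]
        color[root] = GRAY
        while stack:
            v, it = stack[-1]
            advanced = False
            for w in it:
                if color[w] == WHITE:
                    color[w] = GRAY
                    parent[w] = v
                    stack.append((w, iter(graph[w])))
                    advanced = True
                    break
                if color[w] == GRAY:        # back edge: circuit found
                    cycle, u = [w], v
                    while u != w:           # walk DFS tree back to w
                        cycle.append(u)
                        u = parent[u]
                    return cycle[::-1]
            if not advanced:
                color[v] = BLACK
                stack.pop()
    return None
```

A consistent ordering of alternatives yields an acyclic digraph, so the function returns None; any returned node list marks entries the DM should revise first.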
Prior research on the resilience of critical infrastructure usually utilizes a network model to characterize the structure of the components so that a quantitative representation of resilience can be obtained. In particular, network component importance is addressed to express the significance of a component in shaping the resilience performance of the whole system. Due to the intrinsic complexity of the problem, idealized assumptions are often imposed on the resilience optimization problem in order to find partial solutions. This paper exploits the dynamic aspect of system resilience, i.e., the scheduling problem of link recovery in the post-disruption phase. The aim is to analyze the recovery strategy of the system under more practical assumptions, especially inhomogeneous time costs among links. In view of this, the presented work translates the resilience-maximization recovery plan into dynamic decision-making over the runtime recovery option, and a heuristic scheme is devised to treat the core problem of link selection in an ongoing fashion. Through Monte Carlo simulation, the link recovery order rendered by the proposed scheme demonstrates excellent resilience performance as well as accommodation of the uncertainty caused by epistemic knowledge.
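To make the "dynamic decision-making over the runtime recovery option" concrete, the toy sketch below implements one myopic heuristic: at each step, restore the failed link with the largest gain in largest-connected-component size per unit repair time. This is an illustrative stand-in under stated assumptions (connectivity as the resilience proxy, known repair times), not the paper's scheme, and all names are hypothetical:

```python
def largest_component(nodes, edges):
    """Size of the largest connected component of an undirected graph."""
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, best = set(), 0
    for s in nodes:
        if s in seen:
            continue
        size, stack = 0, [s]
        seen.add(s)
        while stack:
            x = stack.pop()
            size += 1
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        best = max(best, size)
    return best

def greedy_recovery_order(nodes, intact_edges, failed, repair_time):
    """Repeatedly restore the failed link with the best connectivity
    gain per unit repair time (a myopic resilience heuristic)."""
    restored, order = list(intact_edges), []
    remaining = list(failed)
    while remaining:
        base = largest_component(nodes, restored)
        def gain_rate(e, base=base):
            return (largest_component(nodes, restored + [e]) - base) / repair_time[e]
        e = max(remaining, key=gain_rate)
        remaining.remove(e)
        restored.append(e)
        order.append(e)
    return order
```

Because links have inhomogeneous repair times, dividing the connectivity gain by the repair time is what makes the selection a rate-based, rather than purely topological, decision.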
Wheel brake system safety during aircraft landing and taxiing is a complex problem that involves the system's technical state, the operating environment, human factors, and so on. Researchers usually treat system safety with traditional probability techniques based on a linear chain of events. However, such techniques cannot comprehensively analyze system safety problems, especially those involving the operating environment, interactions among subsystems, and human factors. Thus, we treat system safety as a control problem based on the system-theoretic accident model and processes (STAMP) model and the system-theoretic process analysis (STPA) technique, to compensate for the deficiencies of traditional techniques. Meanwhile, system safety simulation is treated as system control simulation, and Monte Carlo methods that account for the ranges of uncertain parameters and operational deviations are used to quantitatively study the factors influencing system safety. Firstly, we construct the STAMP model and the STPA feedback control loop of the wheel brake system based on the system functional requirements. Then four unsafe control actions are identified and their causes analyzed. Finally, we construct a Monte Carlo simulation model to analyze different scenarios under disturbance. The results provide a basis for choosing the corresponding process model variables in constructing the context table, and show that appropriate braking strategies can prevent hazards during aircraft landing and taxiing.
Measuring the business-IT alignment (BITA) of an organization determines its alignment level, provides directions for further improvement, and consequently promotes organizational performance. Owing to the capabilities of enterprise architecture (EA) in interrelating different business/IT viewpoints and elements, EA development is well suited to supporting BITA measurement. The extant BITA measurement literature, however, rarely concerns EA; it tends to explain how EA viewpoints or models correlate with BITA, without discussing where to collect and how to integrate EA data. To address this gap, this paper proposes a specific BITA measurement process that associates a BITA maturity model with a widely used EA framework, the DoD Architecture Framework 2.0 (DoDAF2.0). The BITA metrics in the maturity model are connected to the meta-models and models of DoDAF2.0, and an illustrative ArchiSurance case is conducted to explain the measurement process. This paper systematically explores the process of BITA measurement from the viewpoint of EA, which helps to collect the measurement data in an organized way and to analyze the BITA level during architecture development.
Unmanned aerial vehicles (UAVs) may play an important role in data collection and offloading over vast areas where wireless sensor networks are deployed, and the UAV's action strategy has a vital influence on both applicability and computational complexity. Dynamic programming (DP) is well suited to UAV path planning, but it suffers from limited applicability in special terrain environments and from high algorithmic complexity. Based on an analysis of DP, this paper proposes a hierarchical directional DP (DDP) algorithm built on direction determination and a hierarchical model. We compare our method with the Q-learning and DP algorithms through experiments, and the results show that our method improves terrain applicability while greatly reducing computational complexity.
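The "directional" idea, restricting the DP transitions so the UAV always advances, which shrinks the state space, can be sketched on a cost grid where the vehicle moves one column forward per step and at most one row sideways. This is a simplified illustration under those assumptions, not the authors' hierarchical DDP; all names are hypothetical:

```python
def directional_dp_path(cost):
    """Min-cost path over a grid where the vehicle advances one column per
    step and may shift at most one row (the 'directional' restriction)."""
    rows, cols = len(cost), len(cost[0])
    INF = float("inf")
    dp = [[INF] * cols for _ in range(rows)]    # best cost to reach (r, c)
    back = [[0] * cols for _ in range(rows)]    # predecessor row
    for r in range(rows):
        dp[r][0] = cost[r][0]
    for c in range(1, cols):
        for r in range(rows):
            for dr in (-1, 0, 1):               # only forward transitions
                pr = r + dr
                if 0 <= pr < rows and dp[pr][c - 1] + cost[r][c] < dp[r][c]:
                    dp[r][c] = dp[pr][c - 1] + cost[r][c]
                    back[r][c] = pr
    r = min(range(rows), key=lambda i: dp[i][cols - 1])
    total, path = dp[r][cols - 1], []
    for c in range(cols - 1, -1, -1):           # backtrack the optimal path
        path.append((r, c))
        r = back[r][c]
    return path[::-1], total
```

The directional restriction cuts the per-cell work to a constant number of predecessors, which is the same kind of complexity reduction the hierarchical model pushes further.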
Traditional multi-band frequency selective surface (FSS) approaches can hardly achieve a perfect resonance response over a wide band, owing to the limit of the onset grating-lobe frequency determined by the array period. To solve this problem, an approach that combines elements with different periods into a hybrid array is presented. Results of a series of numerical simulations show that multi-periodicity combined-element FSSs designed with this approach usually have much weaker grating lobes than traditional FSSs, and that their frequency response can be well predicted from the properties of their member-element FSSs. A prediction method for estimating the expected grating-lobe energy loss when designing multi-band FSSs with this approach is also provided.
In many practical image segmentation problems, employing prior information can greatly improve segmentation results. This paper continues the study of one kind of prior information, the prior distribution. In this setting there is no exact template of the object; only several samples are given. The proposed method, called the parametric distribution prior model, extends our previous model by adding a training procedure to learn the prior distribution of the objects. The energy function of the active contour model (ACM) is then established with this parametric form of prior distribution, so that during segmentation the template can update itself as the contour evolves. Experiments are performed on an airplane data set, and the results demonstrate the potential of the proposed method: with prior distribution information, both segmentation quality and speed are improved.
Focusing on obstacle avoidance in three-dimensional space for unmanned aerial vehicles (UAVs), a direct obstacle avoidance method in dynamic space based on a three-dimensional velocity-obstacle spherical cap is proposed, which quantifies the influence of threatening obstacles through the spherical cap parameters. In addition, obstacle avoidance schemes for any point on the critical curve during multi-obstacle avoidance are given. Through prediction, the insertion point for obstacle avoidance can be obtained and the flight path re-planned. Taking Pythagorean hodograph (PH) curve trajectory re-planning as an example, the three-dimensional direct obstacle avoidance method in dynamic space is tested. Simulation results show that the proposed method realizes online re-planning of the obstacle avoidance trajectory, which greatly increases the flexibility of obstacle avoidance.
Extensive experiments suggest that kurtosis-based fingerprint features are effective for specific emitter identification (SEI). Nevertheless, the lack of a mechanistic explanation restricts these fingerprint features to a data-driven technique and reduces the adaptability of the technique to other datasets. To address this issue, the mechanism by which the phase noise of high-frequency oscillators and the nonlinearity of power amplifiers affect the kurtosis of communication signals is investigated. Mathematical models are derived for intentional modulation (IM) and unintentional modulation (UIM). The analysis indicates that the phase noise of high-frequency oscillators and the nonlinearity of power amplifiers affect the frequency and the amplitude of the kurtosis, respectively. A novel SEI method based on the frequency and amplitude of the signal kurtosis (FA-SK) is further proposed. Simulations and real-world experiments validate the theoretical analysis and confirm the efficiency and effectiveness of the proposed method.
A proper weapon system is very important for a national defense system. Weapon selection generally means choosing the optimal weapon system among many alternatives, which is a multiple-attribute decision making (MADM) problem. This paper proposes a new mathematical model based on the response surface method (RSM) and grey relational analysis (GRA). RSM is used to obtain the experimental points and to analyze which factors have a significant impact on the selection results, while GRA is used to analyze the trend relationship between the alternatives and a reference series. An RSM model is then obtained, with which all alternatives can be evaluated and ranked. A real-world application illustrates the use of the model for the weapon selection problem. The results show that the model helps decision-makers make a quick comparison of alternatives and select a proper weapon system, providing an effective and adaptable method for the weapon system selection problem.
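The GRA step mentioned above follows a standard computation: form deviation sequences against the reference series, convert them to grey relational coefficients with a distinguishing coefficient rho, and average into grades. The sketch below assumes the criteria are already normalized to comparable scales and uses equal weights by default; function names are hypothetical:

```python
import numpy as np

def grey_relational_grades(alternatives, reference, rho=0.5, weights=None):
    """Grey relational grade of each alternative against a reference series."""
    X = np.asarray(alternatives, dtype=float)
    ref = np.asarray(reference, dtype=float)
    delta = np.abs(X - ref)                            # deviation sequences
    dmin, dmax = delta.min(), delta.max()
    # grey relational coefficients; rho in (0, 1) controls discrimination
    xi = (dmin + rho * dmax) / (delta + rho * dmax)
    w = (np.full(X.shape[1], 1.0 / X.shape[1])
         if weights is None else np.asarray(weights, dtype=float))
    return xi @ w                                      # weighted grades
```

An alternative identical to the reference receives grade 1; the farther an alternative deviates, the smaller its grade, which yields the ranking used alongside the RSM model.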
In order to solve the ambiguous acquisition of binary offset carrier (BOC) signals caused by their multi-peak correlation property, an unambiguous acquisition algorithm named the reconstruction of sub cross-correlation cancellation technique (RSCCT) for BOC(kn, n) signals is proposed. The method combines the principle of signal decomposition with the traditional acquisition structure and reconstructs the correlation function. It first obtains sub-pseudorandom noise (PRN) codes by decomposing the local PRN code, then cross-correlates the BOC(kn, n) signal with the sub-PRN codes to obtain the sub cross-correlation functions. Finally, a correlation function with a single peak is obtained by reconstructing the sub cross-correlation functions, so that the ambiguity of BOC acquisition is removed. Simulations show that RSCCT completely eliminates the side peaks of BOC(kn, n) signals while maintaining the narrow correlation of BOC; its computational complexity is equivalent to that of sub-carrier phase cancellation (SCPC) and the autocorrelation side-peak cancellation technique (ASPeCT), and lower than that of the BPSK-like method. For BOC(n, n), the acquisition sensitivity of RSCCT is 3.25 dB, 0.81 dB and 0.25 dB higher than binary phase shift keying (BPSK)-like, SCPC and ASPeCT at an acquisition probability of 90%, respectively, and the peak-to-average power ratio is 1.91, 3.0 and 3.7 times that of ASPeCT, SCPC and BPSK-like at SNR = −20 dB, respectively. For BOC(2n, n), the acquisition sensitivity of RSCCT is 5.5 dB, 1.25 dB and 2.69 dB higher than BPSK-like, SCPC and ASPeCT at an acquisition probability of 90%, respectively, and the peak-to-average power ratio is 1.02, 1.68 and 2.12 times that of ASPeCT, SCPC and BPSK-like at SNR = −20 dB, respectively.
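The ambiguity that RSCCT removes comes from the BOC autocorrelation shape itself, which can be demonstrated numerically: multiplying a ±1 spreading code by a square-wave subcarrier (two half-chips per chip for BOC(1,1)) produces large negative side peaks at ±0.5 chip around the main peak. The sketch below illustrates only this multi-peak property, not the RSCCT algorithm, and uses a random code as a stand-in for a real PRN sequence:

```python
import numpy as np

def boc11_autocorr(n_chips=1023, seed=0):
    """Circular autocorrelation of a BOC(1,1)-like signal at half-chip lags.

    Returns the normalized ACF at lags [-1, -0.5, 0, +0.5, +1] chips.
    """
    rng = np.random.default_rng(seed)
    code = rng.choice([-1.0, 1.0], size=n_chips)       # stand-in PRN code
    # two samples per chip; subcarrier pattern (+1, -1) within each chip
    sig = np.repeat(code, 2) * np.tile([1.0, -1.0], n_chips)
    n = sig.size
    return np.array([np.dot(sig, np.roll(sig, k)) for k in range(-2, 3)]) / n
```

For a long random code the side peaks sit near −0.5 while the ±1-chip lags average out, so a correlator can lock onto a side peak roughly half the height of the main one; this is the false-lock risk that unambiguous techniques such as RSCCT, SCPC and ASPeCT address.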
Beyond-visual-range (BVR) air combat threat assessment has attracted wide attention as a support for situation awareness and autonomous decision-making. However, traditional threat assessment methods fail to consider the intention and events of the target, resulting in inaccurate assessment results. In view of this, an integrated threat assessment method is proposed to address the existing problems, such as the overly subjective determination of index weights and situation imbalance. The process and characteristics of BVR air combat are analyzed to establish a threat assessment model in terms of target intention, event, situation, and capability. On this basis, a distributed weight-solving algorithm is proposed to determine the index and attribute weights respectively. Then, variable weights and game theory are introduced to deal effectively with the situation imbalance and to combine subjective and objective weighting. The performance of the model and algorithm is evaluated through multiple simulation experiments. The assessment results demonstrate the accuracy of the proposed method in BVR air combat, indicating its potential practical significance in real air combat scenarios.
To overcome the difficulty of discrete optimization in the spectrum allocation problem, a membrane-inspired quantum shuffled frog leaping (MQSFL) algorithm is proposed. The MQSFL algorithm applies the theories of membrane computing and quantum computing to the shuffled frog leaping algorithm, an effective discrete optimization algorithm, and is then used to solve the spectrum allocation problem of cognitive radio systems. By hybridizing quantum frog colony optimization with membrane computing, the quantum states and observation states of the quantum frogs can evolve effectively within the membrane structure. The novel spectrum allocation algorithm can search for the global optimal solution within a reasonable computation time. Simulation results for three utility functions of a cognitive radio system show that the MQSFL spectrum allocation method is superior to several previous spectrum allocation algorithms based on intelligent computing.
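The base algorithm the MQSFL extends, shuffled frog leaping, partitions a ranked population into memeplexes and lets the worst frog in each memeplex leap toward the local best (then the global best), with periodic reshuffling. The continuous-domain sketch below shows only this base scheme, without the quantum or membrane layers; all parameter values are hypothetical:

```python
import random

def sfla_minimize(f, dim, bounds, n_memeplexes=5, frogs_per=6,
                  meme_iters=10, shuffles=20):
    """Basic shuffled frog leaping algorithm for minimization."""
    lo, hi = bounds
    n = n_memeplexes * frogs_per
    frogs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    for _ in range(shuffles):
        frogs.sort(key=f)                              # shuffle: rank all frogs
        gbest = frogs[0]
        # deal ranked frogs round-robin into memeplexes
        plexes = [frogs[i::n_memeplexes] for i in range(n_memeplexes)]
        for plex in plexes:
            for _ in range(meme_iters):
                plex.sort(key=f)
                worst = plex[-1]
                for guide in (plex[0], gbest):         # local best, then global
                    cand = [min(hi, max(lo, w + random.random() * (g - w)))
                            for w, g in zip(worst, guide)]
                    if f(cand) < f(worst):
                        plex[-1] = cand
                        break
                else:                                  # no improvement: restart
                    plex[-1] = [random.uniform(lo, hi) for _ in range(dim)]
        frogs = [fr for plex in plexes for fr in plex]
    return min(frogs, key=f)
```

The memeplex structure loosely parallels the membrane regions of the MQSFL: local evolution inside each compartment with periodic global information exchange.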
Parameter estimation of chirp signals using the fractional Fourier transform (FRFT) is usually based on the assumption that the sampling duration of the practically observed signal equals the time duration of the chirp contained in it. In many actual circumstances, however, this assumption does not hold. Based on an analysis of the practical signal form, this paper derives the estimation error of the existing parameter estimation method and then proposes a novel and universal parameter estimation algorithm. The proposed algorithm is further developed to allow estimation of practically observed Gaussian-windowed chirp signals. Simulation results show that the new algorithm works well.
With the development of the global positioning system (GPS), wireless technology and location-aware services, it has become possible to collect a large quantity of trajectory data. In data mining for moving objects, anomaly detection is a hot topic. Building on the development of anomalous trajectory detection for moving objects, this paper introduces the classical trajectory outlier detection (TRAOD) algorithm and then proposes a density-based trajectory outlier detection (DBTOD) algorithm, which compensates for the TRAOD algorithm's inability to detect anomalies when trajectories are locally dense. Results of applying the proposed algorithm to the Elk1993 and Deer1995 datasets are presented, demonstrating the effectiveness of the algorithm.
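The density-based idea behind such detectors can be illustrated at the level of individual trajectory points: score each point by its mean distance to its k nearest neighbors, so points in sparse regions stand out. This is a deliberately simplified sketch (the DBTOD algorithm works on trajectory partitions, not raw points), and the function name is hypothetical:

```python
import numpy as np

def knn_outlier_scores(points, k=3):
    """Density-based outlier score: mean distance to the k nearest neighbors."""
    P = np.asarray(points, dtype=float)
    # full pairwise Euclidean distance matrix
    d = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    # sort each row and drop the self-distance (always 0) in column 0
    nearest = np.sort(d, axis=1)[:, 1:k + 1]
    return nearest.mean(axis=1)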
The classic data envelopment analysis (DEA) model evaluates the efficiency of decision-making units (DMUs) under the assumption that all DMUs are evaluated with the same criteria setting. Recently, research has begun to focus on the efficiency analysis of non-homogeneous DMUs arising in real practice, such as the evaluation of university departments, where departments argue for different criteria based on their disciplinary characteristics. A DEA procedure is proposed in this paper to address the efficiency analysis of two non-homogeneous DMU groups. Firstly, an analytical framework is established to reconcile the diversified input and output (IO) criteria of the two non-homogeneous groups. Then, a criteria fusion operation is designed to obtain different DEA analysis strategies, and the Friedman test is introduced to analyze the consistency of the efficiency results produced by the different strategies. Next, ordered weighted averaging (OWA) operators are applied to integrate the different results into final conclusions. Finally, a numerical example illustrates the proposed method. The results indicate that the proposed method relaxes the restriction of the classical DEA model and provides more analytical flexibility for the different decision analysis scenarios arising in practical applications.
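The classical homogeneous DEA model that the abstract starts from is a linear program; as a hedged reference point (the standard input-oriented CCR multiplier model, not the paper's non-homogeneous extension), it can be solved with SciPy's `linprog`, which is assumed to be available:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o (multiplier form).

    X: (n_dmu, n_inputs) input matrix, Y: (n_dmu, n_outputs) output matrix.
    Solves: max u @ Y[o]  s.t.  v @ X[o] = 1,  u @ Y_j - v @ X_j <= 0,  u, v >= 0.
    """
    X, Y = np.asarray(X, dtype=float), np.asarray(Y, dtype=float)
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([-Y[o], np.zeros(m)])           # minimize -(u @ Y[o])
    A_ub = np.hstack([Y, -X])                          # u @ Y_j - v @ X_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[o]])[None, :]  # v @ X[o] = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return float(-res.fun)
```

In the toy test below, a DMU producing one unit of output from one unit of input is efficient (score 1), while a DMU needing two units of input for the same output scores 0.5; the paper's contribution is exactly the case where such DMUs are not evaluated under one shared criteria set.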
Nonlinearity and implicitness are common degradation features of stochastically degrading equipment, and they introduce uncertainty into the remaining useful life (RUL) prediction of the equipment. Current data-driven RUL prediction methods have not systematically studied nonlinear hidden degradation modeling and the RUL distribution function. This paper uses a nonlinear Wiener process to build a dual nonlinear implicit degradation model. Based on the historical measured data of similar equipment, the maximum likelihood estimation algorithm is used to estimate the fixed coefficients and the prior distribution of a random coefficient. Using the on-site measured data of the target equipment, the posterior distribution of the random coefficient and the actual degradation state are updated step by step based on Bayesian inference and the extended Kalman filtering algorithm. The analytical form of the RUL distribution function is derived based on the first hitting time distribution. Two case studies verify that the proposed method has advantages over existing methods in prediction accuracy.
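The first-hitting-time construction used above has a well-known closed form in the linear special case: for a Wiener degradation process X(t) = λt + σB(t) with failure threshold w, the hitting time follows an inverse Gaussian distribution. The sketch below evaluates that density as a reference point; it is the textbook linear case with known parameters, not the paper's dual nonlinear implicit model:

```python
import numpy as np

def rul_pdf(t, w, lam, sigma):
    """First-hitting-time (RUL) density of the linear Wiener degradation
    process X(t) = lam*t + sigma*B(t) crossing threshold w > 0.

    This is the inverse Gaussian density with mean w/lam.
    """
    t = np.asarray(t, dtype=float)
    return (w / np.sqrt(2.0 * np.pi * sigma ** 2 * t ** 3)
            * np.exp(-((w - lam * t) ** 2) / (2.0 * sigma ** 2 * t)))
```

With w = 10, λ = 1 and σ = 1, the density integrates to 1 and peaks slightly before the mean hitting time w/λ = 10, reflecting the right skew of the inverse Gaussian; the paper derives the analogous analytical form for its nonlinear hidden-state model.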
Estimating the spreading sequence of a direct sequence spread spectrum (DSSS) signal is a necessary step for blind despreading and demodulation in non-cooperative communications. Two innovative and effective detection statistics are proposed to implement the synchronization and spreading sequence estimation procedure. The proposed algorithm also has low computational complexity, requiring only linear additions and modifications. Theoretical analysis and simulation results show that the algorithm performs quite well in low-SNR environments and outperforms existing typical algorithms when both performance and computational complexity are considered.
Exploiting the Doppler sensitivity of the phase-coded pulse compression signal, a phase-based Doppler estimation and compensation method is put forward to suppress the Doppler sidelobes, raise the signal-to-noise ratio and improve the measurement resolution. The compensation method decomposes the echo into amplitude and phase, and then composes a new compensated echo from the amplitude and the nonlinear component of the phase; the linear component of the phase is used to estimate the Doppler frequency shift. Computer simulation and real data processing show that the method accurately estimates the Doppler frequency shift, successfully suppresses the energy leakage in the spectrum, greatly increases the echo signal-to-noise ratio, and improves the detection performance of the radar system in both the time domain and the frequency domain.
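The amplitude/phase decomposition described above can be sketched directly for a complex baseband echo: unwrap the phase, fit its linear component (whose slope gives the Doppler shift), and rebuild the echo from the amplitude and the nonlinear phase residue. This is a minimal illustration under the assumption of a clean complex echo; the function name is hypothetical:

```python
import numpy as np

def doppler_compensate(echo, fs):
    """Estimate the Doppler shift from the linear phase component of a
    complex echo and return the echo rebuilt without that component."""
    t = np.arange(echo.size) / fs
    amp = np.abs(echo)                         # amplitude component
    phase = np.unwrap(np.angle(echo))          # continuous phase
    slope, intercept = np.polyfit(t, phase, 1)  # linear phase fit
    f_doppler = slope / (2.0 * np.pi)          # Doppler frequency in Hz
    # compensated echo keeps only the nonlinear phase residue
    compensated = amp * np.exp(1j * (phase - slope * t - intercept))
    return f_doppler, compensated
```

Removing the linear phase ramp realigns the code phase before pulse compression, which is what restores the compressed peak and suppresses the Doppler-induced sidelobes.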
A novel engineering-oriented approach to human error probability quantification is presented, based on an overview of existing human reliability analysis methods. The set of performance shaping factors is classified into two subsets: dominant factors and adjusting factors. Firstly, the dominant factors are used to determine the probabilities of three behavior modes, and the basic human error probability and its interval are given for each behavior mode. Secondly, the basic probability and its interval are modified by the adjusting factors, and the total human error probability is calculated by the total probability formula. Finally, a simple example is introduced to illustrate the consistency and validity of the presented approach.
Combining the beamlet transform with steerable filters, a new edge detection method based on line gradients is proposed. Compared with operators based on local point properties, this method achieves edge-detection results with higher SNR and position accuracy, which is quite helpful for image registration, object identification, etc. Edge-detection experiments on optical and SAR images demonstrate a significant improvement over classical edge operators. Moreover, a template matching result based on the edge information of an optical reference image and a SAR image further proves the validity of the method.
Discrete event system (DES) models promote system engineering, including system design, verification, and assessment. The advancement of manufacturing technology has enabled the fabrication of complex industrial systems, making the adoption of modeling methodologies adept at handling complexity and scalability imperative. Moreover, industrial systems are no longer quiescent, so the intelligent operations of a system should be dynamically specified in the model. In this paper, the composition of subsystem behaviors is studied to generate the global system model with the required complexity and scalability, and a Boolean semantic specifying algorithm is proposed for generating dynamic intelligent operations in the model. In traditional modeling approaches, any change or addition of specifications necessitates complete resubmission of the system model, a resource-consuming and error-prone process. Compared with traditional approaches, our approach has three remarkable advantages: (i) an established Boolean semantic is applicable to all kinds of systems; (ii) there is no need to resubmit the system model whenever operations are changed or added; (iii) multiple specifying tasks can easily be achieved by continuously adding new semantics. Thus, this general modeling approach has wide potential for future complex and intelligent industrial systems.
Detecting the forged parts of a double-compressed image is important and urgent work for blind authentication, and a simple and efficient method for accomplishing the task is proposed. Firstly, the probabilistic model of the periodic effects of double quantization is analyzed, and the probability of the quantized DCT coefficients in each block is calculated over the entire image. Secondly, the posterior probability of each block is computed according to Bayesian theory and the above results, and the mean and variance of the posterior probability are used to judge whether a block has been tampered with. Finally, mathematical morphology operations are performed to reduce the false alarm probability. Experimental results show that the method can exactly locate the doctored parts; the experiments also reveal that the higher the second compression quality is, the more accurate the detection is.
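The periodic effect that the Bayesian detector exploits is easy to reproduce numerically: quantizing with one step, dequantizing, and requantizing with a different step leaves a histogram of coefficient values with periodic peaks and empty bins. The toy sketch below demonstrates this on synthetic integer coefficients; it illustrates only the artifact, not the paper's detection method:

```python
import numpy as np

def double_quantize(coeffs, q1, q2):
    """Quantize with step q1, dequantize, then requantize with step q2,
    as happens to DCT coefficients in a double-compressed JPEG."""
    return np.round(np.round(coeffs / q1) * q1 / q2).astype(int)

# histogram of doubly quantized values shows periodic peaks and gaps
vals = double_quantize(np.arange(-500, 501), q1=5, q2=2)
hist = np.bincount(vals - vals.min())
```

With q1 = 5 and q2 = 2, only values of the form round(5k/2) can occur, so whole bins (e.g. the bin for value 1) are empty while their neighbors are populated; blocks compressed only once lack this periodicity, which is what makes single- and double-compressed regions statistically separable.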
A neural-network-based adaptive gain scheduling backstepping sliding mode control (NNAGS-BSMC) approach for a class of uncertain strict-feedback nonlinear systems is proposed. First, the control problem for uncertain strict-feedback nonlinear systems is formulated. Second, the detailed design of the NNAGS-BSMC is described: a sliding mode control (SMC) law is designed to track a reference output via the backstepping technique, and to decrease the chattering resulting from SMC, a radial basis function neural network (RBFNN) is employed for adaptive gain scheduling, with the sliding surface and its derivative as network inputs and the gains as network outputs. Finally, a verification example shows the effectiveness and robustness of the proposed approach. Contrasting simulation results indicate that the NNAGS-BSMC decreases the chattering effectively and achieves better control performance than the BSMC.
To accurately track a nonlinear, non-Gaussian bearings-only maneuvering target online, the constrained auxiliary particle filtering (CAPF) algorithm is presented. To restrict the samples to the feasible area, soft measurement constraints are incorporated into the update routine via $\ell_1$ regularization. Meanwhile, to enhance sampling diversity and efficiency, the target kinetic features and the latest observations are incorporated into the evolution. To exploit past and current measurement information simultaneously, the sub-optimal importance distribution is constructed as a Gaussian mixture of the original and modified priors with fuzzy weighting factors. As a result, the corresponding weights are more evenly distributed, and the posterior distribution of interest is well approximated with a heavier tail. Simulation results demonstrate the validity and superiority of the CAPF algorithm in terms of efficiency and robustness.