IET International Radar Conference (IRC 2018)
Open Access

Visual time-sensitive SAR target detection technology based on human brain mapping

Kuiying Yin¹, Qixue Li¹ (corresponding author), Jimin Liang², Chuan Liu¹, Chang Niu¹, Zhongbao Wang¹

¹ Nanjing Research Institute of Electronics Technology, Nanjing, People's Republic of China
² School of Electrical Engineering, Xidian University, No. 2 Taibai South Road, Xi'an, People's Republic of China
First published: 25 September 2019

Abstract

To detect targets from synthetic aperture radar (SAR) images, this study describes a visual time-sensitive SAR target detection technology based on human brain mapping. Expert brain central response signals are introduced into the automatic analysis of SAR images, and a detection model fusing these brain responses with SAR target features is built through expert on-line brain–machine combined modelling. The experimental results show that the proposed technology succeeds in automatically detecting and recognising targets in SAR images in an actual combat environment, and achieves better detection results and higher detection efficiency than DARPA's work. In conclusion, this study proposes a man–machine fusion technology of high research value and broad application prospects.

1 Introduction

The appearance of synthetic aperture radar (SAR) ended the era of 'point-like' radar target detection and has become a milestone in radar development history, thanks to its ability to image and recognise targets at high resolution. Compared with visible light and infrared imaging, it is unaffected by day and night or by climate and can operate under all-time, all-weather conditions. SAR therefore has wide applications in both military and civil fields. For military use, it can be applied to all-time, all-weather dynamic reconnaissance, military survey and strike-effect assessment of hot spots, thus enhancing reconnaissance capability; for civil use, it can be applied to territorial resource surveys, disaster assessment, ocean research and environmental monitoring, serving the national economy well.

Compared with the development of SAR systems at home and abroad, the detection and recognition of SAR targets, though widely researched, still lacks practical technology and largely fails to transform vast image data into target information rapidly and efficiently [1, 2]. This issue has become the bottleneck of SAR applications, especially in China. For instance, most target detection methods are aimed at relatively simple scenes, such as warship detection against a sea-surface background, and are hard to adapt to complex ground-object scenes; most target recognition research assumes complete training sample sets, which is hard to satisfy for non-cooperative target recognition. Furthermore, the detection and recognition of ground vehicle targets faces many difficult problems, such as detection in complex ground-object scenes, discrimination between targets and complex man-made clutter, and recognition without complete training sample sets, because of complex scenes, target variants and the difficulty of obtaining training samples [3].

In this paper, we review the Defense Advanced Research Projects Agency (DARPA) brain mapping work of the past decade, propose a combined SAR target detection method based on a rotational-invariance rapid detection method and an electroencephalograph (EEG) brain mapping method, and present experimental results that verify the validity of our method.

2 Methodology

2.1 DARPA brain mapping detection method of SAR image targets

A human can recognise various targets in sight rapidly and accurately, thanks to the brain's strong recognition functions. Neuroscientists have found that, after an image is projected onto the retina, the image targets are transformed into neural electrical signals by the photoreceptors at the back of the retina, and these signals are sent to the brain's primary visual cortex via optic nerve fibres. The primary visual cortex initially separates the signals into several parts and sends them in parallel to other relevant brain areas for further processing according to their content (Bear et al. 2007) [4]. Research has shown that the perception and processing of image targets in the brain involve perception, memory, thinking and recognition functions, which are intimately related to neural activity in the occipital lobe, central–parietal lobe, central–frontal lobe, and superior and posterior temporal lobes of the cortex (Luck and Kappenman 2012).

Research on the visual brain using modern non-invasive techniques, namely functional magnetic resonance imaging (fMRI), EEG and magnetoencephalography (MEG), shows that both the brain's activated regions (fMRI) and the collected electromagnetic physiological signals (EEG/MEG) differ when the brain is stimulated by images with two or more different contents. These differences are external, macroscopic manifestations of the brain's internal mechanisms for processing target features (Volkmerz 2005, Hu et al. 2010, Luck and Kappenman 2012, Matran-Fernandez and Poli 2017). These findings, on the one hand, provide new means to understand and uncover the brain's visual recognition process and, on the other hand, encourage us to think about how to exploit these excellent brain functions to enhance the performance of target detection methods.

DARPA has sponsored much image analysis research based on neuroscience in the past decade. The key to using brain mapping technology to analyse satellite images is the feasibility of the method. The main process is shown in Fig. 1: cut the original satellite images into pieces and confirm the existence of dangerous targets by analysing the brain mapping of the image judges.

Fig. 1: DARPA SAR target detection method based on brain mapping EEG signal

DARPA uses these advanced technologies to collect and analyse the brain's signals under specific tasks accurately and thereby build relevant models, which can enhance the computer's recognition and processing accuracy for specific images. However, it is quite difficult to apply this method to vast amounts of data, even when the data are sliced into pieces.

2.2 Our method

Our research proposes an automatic target detection method for SAR images. The method combines learning from brain and machine, that is, it joins the machine's fast computation with the brain's accurate response signals. We first remove more than 99.9% of featureless areas with a machine detection method, then slice the suspected areas into pieces and send them to image judges as inputs, and finally use the judges' brains as sensors to complete the identification of the target group. This method takes full advantage of the combination of neuroscience and computer computation; at the same time, the amount of data requiring human handling is far less than one-thousandth of the original image data, which markedly improves efficiency.

On the basis of the above analysis, our method in brief is: first, use traditional SAR target detection methods (constant false-alarm rate (CFAR), super-pixel segmentation, saliency-based visual attention detection etc.) to roughly detect targets over large image areas; next, divide the original images into suspected target slices and background slices; finally, classify these slices into target and non-target groups by fusing image features with the experts' on-line brain response features, thus realising automatic target detection.
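To make the coarse detection and slicing stage concrete, the following Python sketch shows a minimal cell-averaging CFAR pass followed by chip extraction. It is an illustrative assumption of how such a stage could be written (window sizes, false-alarm rate and chip size are placeholders, and square-law detected pixel values are assumed), not the exact implementation used in our system.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ca_cfar_mask(img, guard=4, train=12, pfa=1e-3):
    """Cell-averaging CFAR over a square-law detected SAR image.

    The clutter level at each pixel is estimated from a training ring
    (outer box minus guard box); the threshold factor follows the
    standard CA-CFAR result for exponentially distributed clutter.
    """
    outer = 2 * (guard + train) + 1
    inner = 2 * guard + 1
    sum_outer = uniform_filter(img, outer) * outer ** 2   # box sums via mean filters
    sum_inner = uniform_filter(img, inner) * inner ** 2
    n_train = outer ** 2 - inner ** 2
    train_sum = sum_outer - sum_inner
    alpha = pfa ** (-1.0 / n_train) - 1.0                 # threshold scaling factor
    return img > alpha * train_sum

def extract_chips(img, mask, size=64):
    """Cut fixed-size chips around detected pixels for the image judges."""
    half = size // 2
    chips = []
    for y, x in zip(*np.nonzero(mask)):
        y0, x0 = max(y - half, 0), max(x - half, 0)
        chips.append(img[y0:y0 + size, x0:x0 + size])
    return chips
```

In practice, neighbouring detections would be clustered before chipping so that each candidate region yields a single slice for the judges.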

Since our guideline is to proceed step by step, we first study the expert on-line fusion of image features and brain responses. On this basis, we then study an expert off-line fusion method and compare it with a machine learning method based on image features alone. The overall study process of this paper is shown in Fig. 2.

Fig. 2: Detection process

2.3 Rapid target slicing method based on rotational invariance

The processing scheme of current automatic detection methods largely mirrors that of a human judge and normally falls into three parts: first, choose candidate regions in the given image; next, extract features from these regions; finally, use a trained classifier to assign the regions to groups. Region selection serves to locate the target. Since targets can appear anywhere in the image, with uncertain sizes and length-to-width ratios, the traditional method traverses the whole image with sliding windows of different sizes and aspect ratios. Although such exhaustive search covers every location at which a target could appear, its weaknesses are apparent: very high time complexity and many redundant windows, which seriously affect the speed and performance of the subsequent feature extraction and selection, as the sketch below illustrates.
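A toy enumeration of exhaustive sliding-window search illustrates this redundancy; the scales, aspect ratios and stride below are hypothetical values chosen only to make the point.

```python
def sliding_windows(h, w, scales=(32, 64, 128), ratios=(0.5, 1.0, 2.0), stride=8):
    """Yield every (x, y, width, height) an exhaustive search would visit."""
    for s in scales:
        for r in ratios:
            win_h, win_w = int(s * r ** 0.5), int(s / r ** 0.5)
            for y in range(0, h - win_h + 1, stride):
                for x in range(0, w - win_w + 1, stride):
                    yield x, y, win_w, win_h

# Even a modest 1000 x 1000 image already produces over a hundred thousand
# candidate windows, almost all of which contain no target.
n_windows = sum(1 for _ in sliding_windows(1000, 1000))
print(n_windows)
```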

Feature extraction aims to cope with the diversity of target shape, pose, illumination and background, in the hope of obtaining robust target features for the subsequent target/background classification. Traditional target detection methods have two main problems: first, the region selection method based on sliding windows is untargeted, with high time complexity and many redundant windows; second, manually designed features lack robustness to variations in targets and background.

In recent years, target detection methods based on deep learning have made great progress thanks to the rapid development of deep learning technology. These methods fall roughly into two categories: methods based on region proposals (region-based convolutional neural network (R-CNN), spatial pyramid pooling network (SPP-NET), Fast R-CNN, Faster R-CNN etc.) and methods based on regression (you only look once (YOLO), single shot multibox detector (SSD) etc.). They have achieved prominent performance on natural image data sets such as ImageNet and PASCAL VOC. Deep learning methods usually require very large training sets, however, and since SAR images fail to meet this need, their direct application is hindered. The deep-learning-based detection of SAR image targets therefore still needs further research.

The rotational invariance target slicing detection method consists of several steps: first, grey values, variances, peak counts and gradient features of candidate targets are combined, and a local CFAR is applied to high-probability areas by constructing an image feature space from simple integral computations. Next, more than 90% of non-target areas are excluded, leaving the candidate areas. Finally, multiple features are used to screen the candidate areas once more. Since the method uses linear-time algorithms, it is fast; its only weakness is a higher false-alarm rate, accepted in order to find more targets.

In the detection process we use the rotational invariance operator proposed by Wu Xiaolin to select targets. Its advantages are insensitivity to angle variations and, compared with circular regions, ease of computation (Fig. 3).

Fig. 3: Rotational invariance operator generation schematic

The rotational invariance operator can be regarded as the result of overlaying two squares. Its response over the image can be computed efficiently from several sets of integral images by simple additions and subtractions, from which the mean, variance and gradient can be obtained at very low computational cost.

With this method we can rapidly exclude more than 90% of the regions and, if needed, add further features for a second round of selection. After the second selection, the remaining areas amount to less than one-thousandth of the original, which greatly reduces the subsequent brain mapping computation.
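The constant-time window statistics behind this step can be sketched as follows. The ring-contrast function is our illustrative reading of the two overlaid squares in Fig. 3, with assumed window sizes, and is not the exact operator definition.

```python
import numpy as np

def integral(img):
    """Summed-area table with a zero first row/column for simple indexing."""
    return np.pad(img.astype(np.float64), ((1, 0), (1, 0))).cumsum(0).cumsum(1)

def box_sum(ii, y, x, size):
    """Sum of a size x size window with top-left corner (y, x), in O(1)."""
    return ii[y + size, x + size] - ii[y, x + size] - ii[y + size, x] + ii[y, x]

def window_mean_var(ii, ii2, y, x, size):
    """Mean and variance of a window from value and squared-value tables."""
    n = size * size
    mean = box_sum(ii, y, x, size) / n
    var = box_sum(ii2, y, x, size) / n - mean ** 2
    return mean, var

def ring_contrast(ii, y, x, inner=16, outer=32):
    """Contrast between an inner square and the surrounding ring: a simple,
    rotation-insensitive statistic built from two overlaid squares."""
    off = (outer - inner) // 2
    inner_sum = box_sum(ii, y + off, x + off, inner)
    ring_sum = box_sum(ii, y, x, outer) - inner_sum
    return inner_sum / inner ** 2 - ring_sum / (outer ** 2 - inner ** 2)

# ii = integral(img) and ii2 = integral(img ** 2) are computed once per image,
# after which every candidate window costs only a handful of additions.
```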

2.4 Brain mapping recognition of SAR target based on EEG

In practice, images usually arrive rapidly as a stream of pictures, which calls for rapid, real-time collection of the brain's responses to them. The very low temporal resolution of fMRI cannot meet this need; EEG, however, has good temporal resolution and can record the brain's response signals in real time [5, 6]. Exploiting this property, we use EEG to analyse the brain's response signals to different image contents and then build a model to enhance the computer's image target recognition rate. Building a model with EEG involves three parts: (i) find and extract EEG features: at present we can distinguish the category of the shown image content very accurately; (ii) find and extract image features that characterise different images accurately; (iii) learn a model that describes the connection between these two kinds of features, so that the harder-to-measure brain response features can be inferred from the more easily measured image features. These brain features characterise the category of image content quite accurately and thereby enhance the computer's image target recognition rate.
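One possible realisation of step (iii), sketched with scikit-learn, is to regress EEG response features onto image features and classify on the fused feature vector; the feature arrays, model choices and hyperparameters below are assumptions for illustration, not the model actually trained in our experiments.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fit_fusion_model(image_feats, eeg_feats, labels):
    """Learn (i) a mapping from image features to EEG response features and
    (ii) a target/non-target classifier on the fused feature vector.

    image_feats: (n_trials, d_img), eeg_feats: (n_trials, d_eeg),
    labels: (n_trials,) with 1 = target slice, 0 = non-target slice.
    """
    mapper = Ridge(alpha=1.0).fit(image_feats, eeg_feats)
    clf = LinearDiscriminantAnalysis().fit(
        np.hstack([image_feats, eeg_feats]), labels)
    return mapper, clf

def classify_new_slices(mapper, clf, image_feats):
    """At run time, predict the EEG features from image features so that
    slices can be scored even when no judge is viewing them."""
    eeg_hat = mapper.predict(image_feats)
    return clf.predict(np.hstack([image_feats, eeg_hat]))
```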

3 Results

In the present experiment we use the oddball paradigm: segment the remote sensing image into a series of 400×400 images and select pictures that contain one target (aircraft or vehicle), no target (for remote sensing pictures) or only disturbance (for SAR images) as candidate stimulus pictures. Then, 40 target pictures are selected as deviant stimuli and 160 background or disturbance pictures as standard stimuli, giving a deviant-to-standard ratio of 1:4. We define 200 pictures as one block, and each image judge completes two blocks for each sort of target. Each picture is shown for 200 ms, after which the screen is cleared for 1700 ms, so the interval between two picture onsets is 1900 ms. The image judge can rest for 3–5 min between two blocks. A block lasts about 6.5 min; the experimental flow is shown in Figs. 4–6.
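The block construction can be sketched as below; this only builds the randomised trial list with the stated timing, while the actual presentation would be driven by stimulus software (not shown), and the chip lists are hypothetical inputs.

```python
import random

STIM_MS, BLANK_MS = 200, 1700      # picture on screen, then blank screen
TRIAL_MS = STIM_MS + BLANK_MS      # 1900 ms between picture onsets

def build_block(target_chips, background_chips, n_deviant=40, n_standard=160):
    """Randomised oddball block: deviant (target) and standard (background)
    stimuli mixed at a 1:4 ratio, 200 trials per block."""
    trials = ([('deviant', c) for c in random.sample(target_chips, n_deviant)]
              + [('standard', c) for c in random.sample(background_chips, n_standard)])
    random.shuffle(trials)
    return trials

# 200 trials * 1.9 s per trial is roughly 6.3 min, matching the ~6.5 min
# block duration quoted above once instructions and padding are included.
```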

Fig. 4: Block flowchart in the experiment

Fig. 5: Flowchart of the experiment: (a) deviant stimulation pictures containing tanks, (b) standard stimulation pictures containing disturbance

Fig. 6: Brain mapping data collection scene

The collected raw EEG data contain eye movement, muscle movement and electrocardiogram (ECG) artefacts and, owing to the acquisition hardware itself, also exhibit baseline drift and other phenomena. Data pre-processing is therefore needed to remove noise and calibrate the baseline before analysis. Acquisition systems often use a high sampling frequency (1 or 2 kHz, or even higher), whereas the EEG frequency range of interest in our study is usually below 100 Hz.

To simplify the subsequent analysis, we downsample the original data (usually to 256 Hz); downsampling is also handled in pre-processing. Data pre-processing further includes re-referencing and filtering: our method uses an average reference and a 0.5–46 Hz band-pass filter on the EEG data. Electrooculogram (EOG), electromyogram (EMG) and ECG noise are removed by independent component analysis (ICA) decomposition, which we apply to the data after downsampling, re-referencing and filtering. As a result, EOG, EMG, ECG and other random noise are removed and clean EEG data are obtained. The operation interface and detection accuracy results of our brain mapping SAR system are shown in Fig. 7 and Table 1.
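A minimal sketch of this pre-processing chain, written with the MNE-Python library, is given below; the channel name, component count and the use of an EOG channel for artefact scoring are assumptions for illustration rather than our exact pipeline.

```python
import mne
from mne.preprocessing import ICA

def preprocess(raw):
    """Downsample, re-reference, band-pass filter and ICA-clean a recording.

    `raw` is an mne.io.Raw object; in practice EMG/ECG components are also
    marked for exclusion after visual inspection of the ICA sources.
    """
    raw.resample(256)                     # downsample to 256 Hz
    raw.set_eeg_reference('average')      # average reference
    raw.filter(0.5, 46.0)                 # 0.5-46 Hz band-pass

    ica = ICA(n_components=20, random_state=0)
    ica.fit(raw)
    # Score components against an (assumed) EOG channel and drop them.
    eog_inds, _ = ica.find_bads_eog(raw, ch_name='EOG')
    ica.exclude = eog_inds
    return ica.apply(raw)
```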

Fig. 7: Operation interface of our brain mapping SAR system

Table 1. Detection accuracy results
SAR target groups Accuracy, %
S1 95.0
S2 82.0
S3 87.9
S4 86.5

4 Conclusion

This paper proposes a rapid SAR target detection method based on a cascade of a rotational-invariance SAR target detection method and brain mapping EEG recognition. By constructing the characteristic space of the image, the method selects suspected areas of the SAR image and excludes a large amount of non-matching search space, which greatly reduces the computation of the subsequent brain mapping recognition and takes full advantage of the brain–machine working mode. Compared with DARPA's work, our method achieves better detection results and higher detection efficiency and proves to be an effective man–machine fusion technology.