SIGINT, TSCM and AI

Last updated 12 Apr 24 @ 15:03

Paul D Turner examines the growing role that AI plays in signals intelligence and technical surveillance countermeasures

Though perhaps dangerously overused and misunderstood, AI plays a significant role within SIGINT/TSCM, across administrative and deployment-oriented signal analysis, analytics and reporting, as a path to clarity within large intelligence datasets. AI and Machine Learning (ML) can be deployed to autonomously surface subtle, deeply buried intelligence-bearing signal events and to identify trends and activities.
SIGINT/TSCM sub-systems have shifted to embrace an emerging measure of artificial intelligence and machine-learning technology that can train the platform to detect relevant signals faster than coded algorithmic responses alone. It should be pointed out that AI and coded algorithms are heavily integrated processes, and both still require a profound measure of human operator intellect to accomplish the task successfully.
When the spectrum is routinely processed at Terahertz (THz) sweep speeds well into the millimetre-wave (mmWave) band, it is simply beyond the human eye to visually extract all signals of interest. This is where AI can make a difference, by detecting elusive signal events that may not be presented to the human operator across a comparatively slow user interface. AI can process dataset visualisation, including differential vector measurements across a massive amount of mission-critical intelligence, at greater sweep speeds or within a real-time Intermediate-Frequency Broadband (IFB) or IQ streaming mode; this capability is becoming a core focus of the AI/ML discussion.
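As a minimal illustration of why the machine sees what the display cannot, the sketch below reduces a block of streamed IQ samples to an averaged power spectrum, the kind of compact representation an AI stage can score far faster than any human-readable trace. The function name, FFT size and windowing choices are illustrative assumptions, not any particular platform's implementation.

```python
import numpy as np

def power_spectrum_db(iq: np.ndarray, nfft: int = 4096) -> np.ndarray:
    """Reduce a block of complex IQ samples to an averaged power spectrum in dB."""
    # Trim to a whole number of frames, window each frame, then average the FFTs
    frames = iq[: len(iq) // nfft * nfft].reshape(-1, nfft)
    windowed = frames * np.hanning(nfft)
    spectra = np.fft.fftshift(np.fft.fft(windowed, axis=1), axes=1)
    avg_power = np.mean(np.abs(spectra) ** 2, axis=0)
    return 10.0 * np.log10(avg_power + 1e-12)  # small floor guards against log(0)
```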
AI is an umbrella term that can describe a highly organised, standards-based process used for powerful SDR feature development and for the analysis of comprehensive datasets at a fundamental level, including display-oriented event labelling, emerging feature development and the merging of feature components, and big-dataset generation and management to better facilitate the AI workflow.
Anomaly detection is generally a long and difficult task for the technical analyst. ML can be used to better train the SIGINT/TSCM platform to recognise existing differential events and more accurately detect subtle anomalies deep within massive spectral datasets across a global collection strategy.
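As one hedged example of how such training might look in practice, the sketch below applies scikit-learn's general-purpose IsolationForest detector to per-bin sweep power. The synthetic data, bin counts and contamination setting are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic stand-in data: each row is one sweep of per-bin power (dB)
baseline = rng.normal(-90.0, 2.0, size=(500, 1024))   # known-clean reference sweeps
fresh = rng.normal(-90.0, 2.0, size=(10, 1024))       # newly captured sweeps
fresh[3, 400:410] += 25.0                              # inject a subtle narrowband event

# Train on the clean baseline, then score the fresh captures
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(baseline)
flags = detector.predict(fresh)                        # -1 marks an anomalous sweep
print("anomalous sweep indices:", np.where(flags == -1)[0])
```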
Collection is accomplished by a combination of AI-based machine learning and predictive logic, trained across many thousands of perfect, not-so-perfect and even very poor-quality samples to achieve a high degree of accuracy. AI brings the promise of more relevant, mission-oriented signal-processing sub-systems at the source-code level for emerging signal types and complexities. AI provides the opportunity to identify and flag standard, non-standard and modified modulation (non-conformity) and to classify signals outside the expected norm.
AI and machine learning can assist the technical operator and software engineer in developing new tools that predict and adapt to a wide range of changing situations; help optimise system performance; filter signals in real time; hand off high-probability events to a human operator; or invoke additional analytical resources to surface exploitable intelligence.
Real-world wireless signals and extracted signal-level intelligence are often of poor quality: noisy and unreliable, sometimes by threat-actor design. They cannot be relied on for high-quality intercept applications, radio direction-finding and, ultimately, localisation. Advances in AI allow automatic signal-quality metrics to be processed far beyond the ability of the human operator.
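A simple example of such a metric, assuming per-bin power in dB and a known signal region (both placeholders for illustration), is an SNR estimate taken against a robust median noise floor:

```python
import numpy as np

def estimate_snr_db(spectrum_db: np.ndarray, signal_bins: slice) -> float:
    """Crude signal-quality metric: peak power over a median noise-floor estimate."""
    mask = np.zeros(len(spectrum_db), dtype=bool)
    mask[signal_bins] = True
    noise_floor = np.median(spectrum_db[~mask])   # median resists outlier bins
    return float(np.max(spectrum_db[mask]) - noise_floor)

# e.g. estimate_snr_db(sweep, slice(400, 410)) -> dB above the ambient floor
```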
Noise filtering and on-the-fly adaptive signal-processing techniques can support the removal or minimisation of unwanted spectral images and artifacts, resulting in cleaner and more reliable intelligence-bearing data within a shared spectral environment, where many layers of ambient signal events can mask the one hostile signal the operator needs to identify.
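One minimal noise-filtering sketch, assuming per-bin sweep power in dB (the kernel width and 6 dB threshold are illustrative, not prescriptive): estimate the local noise floor with a running median, then keep only the bins that rise clearly above it.

```python
import numpy as np
from scipy.signal import medfilt

def denoise_sweep(spectrum_db: np.ndarray, kernel: int = 101,
                  threshold_db: float = 6.0) -> np.ndarray:
    """Flatten the noise floor and keep only bins rising clearly above it."""
    floor = medfilt(spectrum_db, kernel_size=kernel)   # running noise-floor estimate
    excess = spectrum_db - floor                       # dB above the local floor
    return np.where(excess > threshold_db, excess, 0.0)
```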
The use of predictive modelling across the signal-processing environment touches the core of many advanced algorithmic models used within the greater SIGINT role to facilitate event-time prediction, anomaly detection, image- and speech-recognition patterns, and the separation of ambient signals from potentially hostile signal events outside the trained spectrum parameters.
SIGINT/TSCM platforms utilise AI at the signal-analysis level, but AI can also be employed at the detection and capture level to manage the storage and hand-off of the most relevant intelligence, directly assisting in the clarification of big-picture analytics. This important function is accomplished by identifying trends and patterns within the dataset that might not be immediately apparent to the SIGINT operator, as sketched below.
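A hedged sketch of this kind of pattern-surfacing, using generic clustering on invented event features; the feature set, values and cluster count are all placeholders:

```python
import numpy as np
from sklearn.cluster import KMeans

# Invented event log: [centre frequency (MHz), bandwidth (kHz), peak power (dBm)]
events = np.array([
    [2412.0, 20000.0, -55.0],
    [2437.0, 20000.0, -57.0],
    [ 433.9,   250.0, -70.0],
    [ 434.0,   250.0, -68.0],
    [2412.0, 20000.0, -54.0],
])

# Group recurring activity so patterns surface without manual review
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(events)
print(labels)  # events sharing a label recur as one pattern across captures
```

Grouped events of this kind are what feed the big-picture analytics described next.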
This big picture can be disseminated across the intelligence network faster and more confidently, and used to make more informed decisions, identify strategic opportunities and facilitate exploitation possibilities, or to proactively resolve complex challenges across various interoperable stakeholders.
AI and machine learning are interrelated yet distinct concepts that are deeply embedded and co-exist within a SIGINT/TSCM sub-system for use as a Remote Spectrum Surveillance and Monitoring (RSSM) platform.
AI tends to be a much-talked-about catch-all term that refers to the use of technology solutions to build SDRs and controllers with the ability to mimic cognitive functions associated with human intelligence and applied human intellect. This allows the machine to see beyond the dataset: to intuitively understand and analyse spectral elements not only at the signal and band level, but also to classify, identify patterns, correlate significant events and process informed recommendations, or to assess the relevance of the intercept.
AI can be thought of as a self-supporting sub-system within a larger mission platform with unique requirements: a powerful set of technologies (the AI engine) implemented within a SIGINT/TSCM platform, allowing it to reason, learn and act to solve seemingly complex real-world challenges faster and more accurately across a globally connected radio-frequency environment.
As a comparatively equal partner, machine learning is a subset of AI that enables a machine or system to learn and improve from experience (lots of experience) automatically. Instead of relying on hard-coded rules (explicit programming), machine learning uses algorithms to analyse large amounts of spectrum data, learning from thousands or tens of thousands of iterations of every possible variable to then process an informed decision beyond the source code.
Competent machine-learning algorithms improve performance over time as they are trained by exposure to vast, uniquely layered spectral signal-level datasets acquired from many different time periods, geographic locations and types of SDR hardware, and across a wide range of operator defaults and setting variables.
Machine learning provides the means not to see a perfect-world spectral environment, which arguably does not exist, but to see and learn the spectrum more accurately from the less-than-perfect, typical operator-deployed viewpoint that tends to be the norm.
ML combines the ability to structurally morph the best spectra, poor-quality spectra, interfered-with spectra and spectra captured in less-than-optimal conditions with other factors such as different radio hardware, antennas, geographical locations and distances from emitters, power levels, modulation schemes, software features, settings and operator experience.
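In code, that morphing amounts to data augmentation. The sketch below is a simplified assumption rather than any platform's actual pipeline: it derives many imperfect training variants from one captured sweep by perturbing noise, gain and tuning offset.

```python
import numpy as np

def augment_sweep(spectrum_db: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Derive one imperfect training variant from a captured sweep."""
    variant = spectrum_db + rng.normal(0.0, 1.5, size=spectrum_db.shape)  # receiver noise
    variant += rng.uniform(-10.0, 10.0)           # power-level / distance variation
    return np.roll(variant, rng.integers(-5, 6))  # small tuning or calibration offset

rng = np.random.default_rng(0)
captured = rng.normal(-90.0, 2.0, size=1024)  # placeholder sweep
training_set = np.stack([augment_sweep(captured, rng) for _ in range(100)])
```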
AI is the concept of enabling a machine or system to sense, reason, act or adapt like its human counterpart. ML is the application of AI that allows machines to extract imprinted knowledge from real-world datasets and learn from it autonomously.
An AI engine's structure can vary depending on the specific application, but it generally includes several key considerations and elements. The driving algorithms can be simple instructions or complex mathematical computations: calculations and coded rules used to solve a problem or analyse a dataset.
Spectral-based machine learning, as an AI technique, uses algorithmic mathematical computation to build a predictive model at the software level. A coded algorithm parses datasets and then learns from that data by identifying and correlating the spectral patterns discovered, generating interpretive models. Taking this concept further, deep learning is a type of machine learning that can ultimately determine on its own whether its predictive result is accurate.
AI uses artificial neural networks consisting of multiple layers of applied algorithmic functionality. Each layer of the network performs an independent analysis of the spectral data and produces a correlated output that other layers of the network can interpret and process further.
Neural networks are large arrays of algorithms that mimic the operations of the human brain and are used to recognise relationships within a large spectral dataset. This is critical in a deep-learning context, where the network is uniquely designed to interpret and respond to sensory input data via machine perception, rendering labelling or clustering of raw spectral-energy patterns within a total energy-capture environment.
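A small, hedged illustration of that layering, using scikit-learn's MLPClassifier on synthetic sweeps; the labels, layer sizes and injected emitter are all invented for the example:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic sweeps: label 0 = ambient only, label 1 = sweep with an injected emitter
X = rng.normal(-90.0, 2.0, size=(400, 1024))
y = np.repeat([0, 1], 200)
X[200:, 480:500] += 20.0

# Two hidden layers: each analyses the previous layer's output and hands a
# correlated result to the next, mirroring the layered description above
net = MLPClassifier(hidden_layer_sizes=(64, 16), max_iter=500, random_state=0)
net.fit(X, y)
print(net.predict(X[:3]), net.predict(X[-3:]))  # expect mostly 0s, then 1s
```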
Data-preprocessing resources are tools used to filter and transform large amounts of raw data into a format that can be understood by the AI engine and travel efficiently across the neural network. Training and validation resources are tools used to train the AI engine on a large collection of good, bad and ugly datasets, and then validate the accuracy of the engine's predictive solutions. Analytical metrics are necessary to measure the performance of the AI engine and adjust it over time as the ambient radio-frequency spectrum changes, new threat technology emerges or never-before-seen technology is introduced.
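Those three elements, preprocessing, train/validate splitting and metrics, map onto a few lines of a standard workflow. This is a generic scikit-learn sketch under the same synthetic-data assumption as above, not any vendor's pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(-90.0, 2.0, size=(400, 1024))   # synthetic labelled sweeps, as before
y = np.repeat([0, 1], 200)
X[200:, 480:500] += 20.0

# Hold out a validation split so accuracy is measured, not assumed
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_tr, y_tr)                                       # preprocessing + training
print(classification_report(y_val, model.predict(X_val)))   # validation metrics
```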
The AI engine should also be able to recognise and learn from previously unknown signal events, although it will only be able to classify such a signal as unknown. This is a positive and desirable response, rather than rendering a false classification. This implied limitation can be overcome with third-party integration resources that allow the AI engine to draw from a larger, more expanded dataset and to integrate and learn from other datasets, software or platforms. All of these key elements work together to enable the AI engine to analyse data, learn from it, make predictions and improve over time in a cyclic approach.
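A minimal sketch of that "unknown" behaviour, assuming any classifier that exposes predict_proba and an illustrative 0.85 confidence threshold:

```python
import numpy as np

def classify_with_unknown(model, sweep: np.ndarray, labels: list,
                          min_confidence: float = 0.85) -> str:
    """Return a class name, or 'unknown' when no class is confident enough.

    Declining to guess is preferable to rendering a false classification."""
    probs = model.predict_proba(sweep.reshape(1, -1))[0]
    best = int(np.argmax(probs))
    return labels[best] if probs[best] >= min_confidence else "unknown"

# e.g. classify_with_unknown(net, X[0], ["ambient", "emitter"])
```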
The care and feeding of a neural network is fundamental in populating an AI engine via a machine-learning approach. There is no easy route or substitute for a highly focused, scientific approach to populating a competent dataset. When shortcuts are taken, the engineer or operator will have no idea what the expected outcome will be and cannot have any measure of confidence in the capability of the platform.
All too often within the TSCM environment, we have seen competitive interests rushing to market with only the financial prize as their primary goal. Taking shortcuts and half-measures at the technology level is a very short-sighted business model and tends to fool almost everyone concerned, except the threat actors!
When it comes to an AI solution within a SIGINT/TSCM role, we are seeing competitive interests simply rebrand their respective products with implied TSCM/SIGINT capability when, in reality, this is a marketing ploy built on third-party products with limited compatibility.
A full-circle, 360° approach is essential to advancing the benefits of AI within a modern standards-based methodology. The development and implementation of a functional AI sub-system takes research, dataset development and competent implementation within a fully qualified software-defined radio environment.

Paul D Turner, TSS TSI, is the President/CEO of Professional Development TSCM Group Inc. He is a certified Technical Security Specialist (TSS) and Technical Security Instructor (TSI) with 44 years' experience in providing advanced operator certification training and delivering TSCM services worldwide; he is the developer of the Kestrel TSCM Professional Software and manages the Canadian Technical Security Conference (CTSC) under the operational umbrella of the TSB 2000 (Technical) Standard.