Human Centered Design: What Warfighters Need to Win, Part II 

By Aaron Festinger, Machine Learning Engineer

In Part I of this series, we argued that building in ease of use by practicing Human Centered Design (HCD) is essential to creating practical, effective tactical equipment. HCD enables the development of robust tools that simplify tasks, and in the case of the warfighter, those tools can save lives.

In Part II, we'll discuss how acoustically based artificial intelligence for situational awareness can truly support the warfighter when it is designed to be lightweight and simple.

Why acoustic AI for situational awareness? 

When tactical artificial intelligence comes up, people tend to think of visual interfaces such as the heads-up display (HUD) that provide line-of-sight or augmented reality capabilities. Acoustically based artificial intelligence and interfaces are often overlooked, or considered only as augmentation for the visual interface. Why this happens is a matter of speculation. Perhaps it's because acoustic interfaces seem prosaic and lack the futuristic thrill associated with augmented reality. It may also be a bias inherited from everyday life, where a picture is thought to be worth a thousand words and visual data is king. Whatever the reason, ignoring acoustics in the tactical environment is a serious oversight.

We hear before we see – The first sign of danger, the natural early warning system, is typically an acoustic phenomenon. Visual information may deliver more data, but it is usually slower and arrives over a high-entropy channel, compounding the operator's situational awareness burden. For all the promise of visual-based artificial intelligence (and there is indeed much promise and demand), some features of an acoustic interface cannot be matched. Visual-based AI systems promise a significant expansion of the operator's capabilities, but they do so at the cost of adding complexity to the operator's kit, tasks, and visual field. An acoustic-based AI could deliver faster, simpler, and more immediately important information than a visual-based AI, and in doing so could simplify the operator's operational environment. An acoustic-based AI would inevitably add some complexity of its own (computer systems are complex enough to introduce multiple failure modes regardless of their form), but the extent of its complexity footprint is tunable and potentially much smaller: fewer pieces of hardware, lower power requirements, and little to no networking.

Situational awareness is a must – Maintaining situational awareness is one of the biggest challenges in conducting ground operations. Moving quietly, often in the dark, loaded down with equipment, while attending to every task that demands attention is a major challenge. Teammates fall away, pull ahead, or cross lines while trying to watch out for the enemy. They miss signals and confuse directions. When someone opens fire, they may have no awareness of their location, the size of the force, or even the direction of fire. To compensate and quickly spread awareness of an attacking enemy unit, infantrymen are trained to immediately announce the "3Ds": the distance, direction, and description of the enemy. This can be effective, but it can also easily fail if the operator is unable to glean all the information accurately and pass it on. An injured teammate may go unnoticed, or a soldier may become separated from his platoon. There is no fix other than for leadership to constantly stress the importance of situational awareness during training. How might artificial intelligence be used to help close gaps in situational awareness? The answer begins with sound.
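To make this concrete, here is a minimal sketch of how one of the 3Ds, direction, might be recovered from sound alone using a two-microphone array and time difference of arrival (TDOA). The microphone spacing, sample rate, and function names are illustrative assumptions, not a description of any fielded system.

```python
# Illustrative sketch only: estimating the "direction" of the 3Ds from a
# two-microphone array via time difference of arrival (TDOA).
# Mic spacing, sample rate, and names are assumptions, not a fielded design.
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 C
MIC_SPACING = 0.2        # assumed distance between the two microphones, meters
SAMPLE_RATE = 48_000     # assumed sample rate, Hz

def direction_of_arrival(left: np.ndarray, right: np.ndarray) -> float:
    """Return the source bearing in degrees off the microphone axis."""
    # Cross-correlate the two channels; the lag of the peak is the TDOA.
    corr = np.correlate(left, right, mode="full")
    lag_samples = np.argmax(corr) - (len(right) - 1)
    tdoa = lag_samples / SAMPLE_RATE
    # Far-field geometry: the path-length difference fixes the bearing.
    cos_theta = np.clip(tdoa * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_theta)))
```

Plain cross-correlation is a deliberately simple choice here; a production system would likely use a more robust estimator such as GCC-PHAT and more than two microphones.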

Based on sounds alone, it is possible to identify (see the sketch after this list):

  • All movement in an area
  • A count of maneuvering troops
  • When and what kind of injury an operator sustains
  • Distance, direction, and description of all participants (including weapon types and fire orientation)
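How might a system recognize such events in the first place? Below is a minimal sketch of an acoustic event classifier using common off-the-shelf features and a generic model. The label set, training data, and model choice are hypothetical assumptions for illustration, not Octo's method.

```python
# Illustrative sketch only: classifying short audio clips into hypothetical
# event classes. The label set, features, and model are assumed examples.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

# Assumed event taxonomy; real training labels would draw from a set like this.
LABELS = ["small_arms_fire", "vehicle_engine", "footsteps", "voice"]

def log_mel_features(clip: np.ndarray, sr: int = 16_000) -> np.ndarray:
    """Summarize a clip as the mean and spread of its log-mel energies."""
    mel = librosa.feature.melspectrogram(y=clip, sr=sr, n_mels=64)
    log_mel = librosa.power_to_db(mel)
    # Pool over time so every clip maps to a fixed-length feature vector.
    return np.concatenate([log_mel.mean(axis=1), log_mel.std(axis=1)])

# Training would use labeled field recordings (train_clips and train_labels
# are hypothetical and not shown); inference then tags each incoming clip.
model = RandomForestClassifier(n_estimators=200)
# model.fit(np.stack([log_mel_features(c) for c in train_clips]), train_labels)
# event = model.predict(log_mel_features(new_clip)[None, :])[0]
```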

Acoustic AI can prioritize data – Operators may be unable to make use of this information as their hearing becomes overwhelmed, but a properly tuned ear (biological or electronic) can still glean much of it. For example, when tracking ammunition usage during a firefight, it is easy to hear what type of ammunition is being used, the number of rounds fired, where each weapon was fired from, and when a weapon runs out; but operators typically do not have the wherewithal to use this information. Acoustic information may also be underused in analyzing military vehicle engine sounds, such as those from an HMMWV (Humvee). An expert ear may pick out a host of potential problems in the sound of the Humvee's engine before they manifest, but an infantryman typically cannot. Similarly, the attentive ear can discern when a squad mate is panicking or struggling physically, but this demands an attention and keenness of observation that may not be available while a crisis is developing.
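To make the round-counting example concrete, here is a minimal sketch that counts shots by detecting impulsive threshold crossings in the audio envelope. The threshold and the minimum gap between shots are assumed values that would need tuning against real recordings.

```python
# Illustrative sketch only: counting rounds fired by detecting impulsive
# threshold crossings in the audio envelope. The threshold and minimum
# gap between shots are assumptions that would need tuning on real data.
import numpy as np

def count_shots(audio: np.ndarray, sr: int,
                threshold_db: float = -10.0, min_gap_s: float = 0.05) -> int:
    """Count impulsive events (candidate shots) in a mono audio buffer."""
    envelope = np.abs(audio)
    peak = max(float(envelope.max()), 1e-9)
    level_db = 20 * np.log10(np.maximum(envelope, 1e-9) / peak)
    above = level_db > threshold_db
    # An onset is a sample that crosses the threshold going upward.
    onsets = np.flatnonzero(above[1:] & ~above[:-1])
    # Merge onsets closer together than the minimum gap (echoes, bolt noise).
    min_gap = int(min_gap_s * sr)
    shots, last = 0, -min_gap
    for onset in onsets:
        if onset - last >= min_gap:
            shots += 1
            last = onset
    return shots
```

Identifying the weapon type would ride on top of this, for example by classifying the spectrum of each detected impulse with a model like the one sketched earlier.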

The common thread in all of these examples is that acoustic information is available to the operator but cannot be reliably exploited. Acoustic AI would be well situated to act as an “attention prosthetic” in these situations, simplifying the operational environment for the operators and forestalling the development of crisis situations that could hold up the mission and exact a cost in lives and equipment.

In Part III of this series, we’ll discuss the feasibility and potential pitfalls of developing and implementing acoustic AI solutions. Questions? Contact a member of Team Octo to discuss our approach.