It is no secret that cognitive technology is changing countless industries worldwide, and in many cases these changes have been objectively positive and constructive. However, as these technologies become an increasingly normalized aspect of our culture, the management of them remains in flux. When it comes to security and public safety, flaws in this growing technology can present a glaring risk to our general wellbeing.
Well intentioned as they may be, Artificial Intelligence (AI) and Machine Learning (ML)-based recognition tools can have the opposite effect on the very individuals and institutions they strive to protect. While I am not against the implementation of technology as a security force multiplier, it must be used to augment the human operator, not serve as a replacement or as an excuse to hire less qualified (cheaper) personnel. This reflects a long-running debate in security and crime prevention.

For example, one of the most common security technology investments is video surveillance equipment. While this is, without question, one of the most vital physical security investments an organization can make, it's important to note that video surveillance is evidentiary in nature: it records crime rather than preventing it. If you buy cameras before investing in access control, alarms, and the personnel to monitor the video and respond to potential threats, your investment may have been made in vain. This is a viewpoint I have held steadfastly for a long time, and recent developments and media stories have only reinforced its credibility with regard to the future of both security and public safety. Take, for example, recent developments in Metropolitan Police biometrics, in which the technology was found to be 98% inaccurate.
Finding a way around
Despite a variety of industry advancements in recent decades, a sad but true reality persists in the security world: whenever a new safety measure emerges, criminals will try to find a way to sabotage it.