AI: AN IN-DEPTH REPORT

Locking It Down

Security platform HiddenLayer offers protection for AI systems.

BY MELANIE SEVCENKO

For almost two and a half gripping hours, the new Netflix thriller Leave the World Behind, starring Julia Roberts and Mahershala Ali, portrays a terrifyingly dystopian reality: one where an ambiguous cyberattack has destabilized the United States. No phones, no internet, no GPS. Communication is dismantled and information is sparse and disorienting. Perhaps what is most unsettling about the film, however, is its eerily close proximity to a reality we’re teetering toward. With targeted hacks and attacks, the cyber framework we’ve so blindly hung our survival upon will crumble, taking our civil society with it. Yet one way companies can protect themselves against a cyberattack, even one of far lesser magnitude than those depicted in sci-fi thrillers, is to secure their artificial intelligence (AI) and machine-learning (ML) models from being compromised.

AI is a broad concept, referring to a computer or system that mimics the “thought” processes and tasks of a human, whereas ML is a subset of AI: the technologies and methods that train a computer or system to analyze data, learn from it and improve. Whether we realize it or not, ML/AI are essential components of the applications we use in our daily lives, from Google Maps and social media to smart home devices and medical diagnostics. So it’s no surprise that companies are increasingly relying on ML/AI for their growth.

Yet with innovation in the industry comes the need for security to avoid chaos and compromise. “Complexity is the biggest enemy of security,” says Bruce Schneier, a security technologist and lecturer in public policy at the Harvard Kennedy School. “AI systems have their own special vulnerabilities, in addition to all of the vulnerabilities that have to do with them being computers.”

To fill in those security gaps, a new platform called HiddenLayer uses a noninvasive software approach to observing and securing ML/AI. The company offers a suite of security products that specialize in protecting ML/AI algorithms, models and their underlying data from adversarial attacks, vulnerabilities, malicious code injections and data poisoning, in which an attacker gives the AI the opportunity to “learn” something incorrect, with devastating consequences. For example, if an AI is being trained to distinguish tanks from cars, a hacker could “trick” it into thinking that a tank is a car, using a set of images of tanks with fake bumper stickers on them. It’s these more unusual data-poisoning scenarios that are not included in the initial tests and validation of AI systems. “AI is a powerful tool that you don’t want to just leave unhinged,” says HiddenLayer’s VP of engineering, David Beveridge.

HiddenLayer’s products are mostly cloud-based but also allow customers to self-host. “So we can set the systems up in your company’s cloud environments,” explains Beveridge. Additionally, the platform offers consulting services in cybersecurity, AI, reverse engineering and threat research.

According to Beveridge, “the vast majority [of companies] are completely unguarded as far as we can tell. It’s kind of like the internet in the ’90s.” The reason, says Schneier, is that “the market rewards features, scalability, speed, low cost; everything except security. For AI in particular, it’s a vast race to grab market share and profitability and monopoly status, and security is a minor afterthought. It’s also pretty new, and we’re just learning about those unique ML/AI vulnerabilities.”

As for who is behind the attacks, Beveridge explains that neutral hackers are typically academic researchers, individuals or security organizations, all of whom will break into a system to expose its weaknesses and then publish their findings online.

[Photo: David Beveridge of HiddenLayer]
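For readers curious what the bumper-sticker attack looks like in practice, the toy sketch below illustrates the idea. It is entirely hypothetical and has nothing to do with HiddenLayer's actual products: a tiny 3-nearest-neighbour classifier over two made-up image features, where injecting a few tank-like training samples deliberately mislabeled "car" is enough to flip the model's answer on a genuine tank.

```python
from collections import Counter

def knn_predict(samples, feats, k=3):
    # samples: list of ((x, y), label) pairs; plain k-nearest-neighbour vote
    def dist2(p):
        return (p[0] - feats[0]) ** 2 + (p[1] - feats[1]) ** 2
    nearest = sorted(samples, key=lambda s: dist2(s[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Clean training data: "tank" images cluster near (8, 8), "car" near (2, 2).
clean = [((8, 8), "tank"), ((9, 7), "tank"), ((7, 9), "tank"),
         ((2, 2), "car"), ((1, 3), "car"), ((3, 1), "car")]

# Poisoned samples: tank-like features deliberately labeled "car"
# (the "fake bumper sticker" trick described in the article).
poison = [((8, 7), "car"), ((7, 8), "car"), ((8, 9), "car")]

test_tank = (8, 8)
print(knn_predict(clean, test_tank))           # clean model: "tank"
print(knn_predict(clean + poison, test_tank))  # poisoned model: "car"
```

A handful of poisoned points sitting close to the tank cluster is enough to outvote the genuine tank samples, which is why such scenarios are so hard to catch with ordinary accuracy testing on clean data.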