The Evolution of Traditional Surveillance: Artificial Intelligence Leaves Bad Guys Nowhere to Hide

[NetEase Smart News, August 29] Over the years, we have all appeared in footage from security cameras, police surveillance, live streams, and even strangers' social media posts. Even if we wore cameras on our own heads, there would be no guarantee that others would not capture us in theirs. For a person, searching months of surveillance footage for a specific individual is like looking for a needle in a haystack: impractical without enormous amounts of time and energy. For a machine, it is not. Hollywood thrillers have long depicted this kind of technology, in which tools identify the people and actions in videos and images. Companies like Facebook and Baidu have been investing heavily in this branch of AI. As error rates keep falling and the reach of these systems widens, we can expect that nearly every piece of video will soon be analyzed to identify the people, objects, and behaviors inside it.

For decades, artificial intelligence researchers have tried to create algorithms capable of recognizing images and interpreting their content. Images are complex: each consists of millions of pixels forming intricate patterns, which makes traditional hand-coded algorithms insufficient. In 2012, researchers achieved a breakthrough with a technique called deep learning, which turns the idea of interconnected neurons in our brains into mathematical functions that perform exceptionally well on vast numbers of images. Given enough examples, a deep neural network can detect patterns shared across different images, such as the shapes and textures of various cats. Since then, these systems have grown in scale and complexity. Researchers began building larger neural networks, while hardware manufacturers like NVIDIA created specialized processors to accelerate them, leading to rapid improvements in what these systems can achieve.

With sufficient image or video data, such systems can be trained to learn a person's appearance and identify that individual consistently. A well-known example is the MegaFace dataset at the University of Washington, which contains nearly five million images of about 672,000 people sourced from Flickr. In July this year, the MegaFace team showcased the latest results of algorithms trained on the dataset. When matching two photos of the same person across datasets padded with one million distractor photos, the top team achieved 75% accuracy when allowed a single guess, rising above 90% when allowed ten guesses (a toy sketch of this "rank-k" metric appears below). "We need global testing of facial recognition capabilities to ensure practical applications; large-scale testing helps us identify both the flaws and the successes of recognition algorithms," said Ira Kemelmacher-Shlizerman, the University of Washington professor who leads MegaFace, in an interview with the university's publication.

Similar technology applies to video, which is essentially a stream of still images and therefore requires more processing power to understand how events unfold over time. At the end of August 2017, Baidu announced that it had won the ActivityNet challenge, correctly tagging human behaviors, such as chopping wood, cleaning windows, and walking dogs, in 300,000 videos with an accuracy of 87.6%. Facebook is interested in the same capability and wants to understand who is in a live video and what they are doing. Joaquin Candela, Facebook's director of applied machine learning, said last year that ideally Facebook would understand the context of every live video so it could offer each user a personalized video channel.
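To make those "single guess" and "ten guesses" figures concrete, here is a toy sketch of the rank-k identification metric used by benchmarks like MegaFace. Everything in it is an illustrative assumption: a real system would use a trained deep network to embed each face photo into a feature vector, whereas this sketch fakes the embeddings with random vectors, and the sizes and 128-dimensional embedding are arbitrary choices, not MegaFace's actual protocol.

```python
# Toy sketch of the rank-k identification metric: a probe photo counts as
# recognized if its true match lands among the k most similar gallery
# vectors. Embeddings here are random stand-ins for a deep network's output.
import numpy as np

rng = np.random.default_rng(0)

# Pretend a deep network already embedded faces into 128-d vectors:
# one clean gallery vector per person, plus a pool of distractors.
num_people, num_distractors, dim = 100, 1_000, 128
gallery = rng.normal(size=(num_people + num_distractors, dim))
# Each probe is a noisy copy of that person's gallery vector.
probes = gallery[:num_people] + 0.8 * rng.normal(size=(num_people, dim))

def rank_k_accuracy(probes, gallery, k):
    """Fraction of probes whose true identity appears among the k
    most similar gallery vectors (by cosine similarity)."""
    p = probes / np.linalg.norm(probes, axis=1, keepdims=True)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = p @ g.T                            # (num_probes, gallery_size)
    top_k = np.argsort(-sims, axis=1)[:, :k]  # indices of k best matches
    hits = (top_k == np.arange(len(probes))[:, None]).any(axis=1)
    return hits.mean()

print("rank-1 :", rank_k_accuracy(probes, gallery, 1))
print("rank-10:", rank_k_accuracy(probes, gallery, 10))
```

Even in this toy setup the shape of the result matches the MegaFace numbers: allowing ten candidate matches always scores at least as high as allowing one, which is why rank-10 accuracy climbs above rank-1.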
The U.S. government has already begun deploying this technology in limited ways. Last week, the New York Department of Motor Vehicles announced that facial recognition technology had led to the arrest of more than 4,000 people. Its software does not scan surveillance video; instead, it compares driver's license application photos against images already in the database, making it harder for fraudsters to assume someone else's identity (a minimal sketch of this kind of one-to-many check appears at the end of this article). If states or the federal government expand facial recognition into public spaces, they will start with a database covering more than half of American adults, and larger datasets only improve AI performance.

That day may not be far off. Axon, known for its stun guns and as the largest camera vendor to U.S. police agencies, has ambitions to build AI into its products, and earlier this year it acquired two AI companies. Axon CEO Rick Smith told Quartz that the ideal use case for AI would be generating incident reports automatically, giving police officers more time for fieldwork. He noted that facial recognition is not yet widespread but could flourish in the future. Motorola, another key camera provider, highlights its software's ability to quickly learn a face, for example to aid police searches for missing children.

Security cameras themselves are also pushing AI forward. In April, Intel announced hardware for security cameras that can perform "dense monitoring, stereoscopic vision, face recognition, counting," and "behavioral analysis." Another camera, the DNNCam, is billed by its maker as a waterproof, self-sufficient deep-learning camera that is "almost indestructible," meaning it can operate in remote areas without internet access, for instance identifying returning customers from behind a cash register.

But what can privacy-conscious, law-abiding citizens do if such surveillance becomes the norm? Not much. Researchers have found ways to trick face recognition software, whether by crafting special glasses or by adding graffiti-like patterns that fool the AI, but these tricks usually require understanding how the target algorithm works.

(From: Quartz Media; Editor: Wu Man)
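As referenced in the DMV example above, here is a minimal sketch of that kind of one-to-many photo check. It assumes the open-source face_recognition library and hypothetical image files on disk; the DMV's actual software is proprietary, so this is only an illustration of the general approach.

```python
# Minimal sketch of a one-to-many check: compare a new license-application
# photo against photos already on file. Uses the open-source
# face_recognition library as a stand-in; all file paths are hypothetical.
import face_recognition

# Encode the applicant's photo as a 128-d face embedding
# (assumes exactly one detectable face in the image).
new_photo = face_recognition.load_image_file("new_application.jpg")
new_encoding = face_recognition.face_encodings(new_photo)[0]

# Encode the photos already in the database (paths are hypothetical).
db_paths = ["license_001.jpg", "license_002.jpg", "license_003.jpg"]
db_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file(p))[0]
    for p in db_paths
]

# Flag any existing photo whose embedding is close to the new one.
distances = face_recognition.face_distance(db_encodings, new_encoding)
for path, dist in zip(db_paths, distances):
    if dist < 0.6:  # the library's common default tolerance; lower = stricter
        print(f"Possible duplicate identity: {path} (distance {dist:.2f})")
```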
