Microsoft is banning the use of its artificial intelligence facial recognition service “by or for” police in the United States.
In an updated code of conduct for Azure OpenAI, the company also said the service cannot be used for real-time facial recognition technology by “any law enforcement agency anywhere in the world,” including mobile and dashboard cameras, “to attempt to identify individuals in an uncontrolled, ‘wild’ environment” or “to identify individuals in a database of suspects or former prisoners.”
The Azure OpenAI service provides enterprise customers with access to OpenAI’s large language models (LLMs). The service is fully managed by Microsoft and limited to customers who have an existing relationship with the company, use it for lower-risk scenarios, and are focused on mitigating risk.
Last week, Axon, a company that makes technology and weapons, including for law enforcement, launched artificial intelligence software that enables police to automate reports. The product, called Draft One, is billed as “revolutionary new software that creates high-quality police reports in seconds.” Axon said Draft One runs on OpenAI’s most powerful LLM, GPT-4, and can write reports by automatically transcribing audio from the police cameras the company sells.
However, critics said the tool could be problematic given that AI is prone to “hallucinations,” or making up false or nonsensical information, and that it could be used to shield police from legal liability if a report contains incorrect information.
Dave Maass, director of surveillance technology research at the Electronic Frontier Foundation, told Forbes that Draft One is “kind of a nightmare.” He added that police are not trained in the use of artificial intelligence tools and therefore may not understand the limits of the technology.
It is unclear whether Microsoft’s updated code of conduct is related to the Axon release.