
OpenAI alters usage policy, removes explicit ban on military use
OpenAI, the creator of ChatGPT, revised its usage policy on January 10 to remove the outright prohibition on using its models for military and warfare purposes. While the revised policy no longer expressly prohibits military use, it states that users must not use its services to harm themselves or others. Among the significant hazards the company cites is the development or use of weapons.
Since amending the policy, OpenAI has indicated that some national security uses of AI are consistent with its mission: "For example, we are already collaborating with DARPA [the United States' Defense Advanced Research Projects Agency] to accelerate the development of new cybersecurity tools to secure open source software on which critical infrastructure and industry rely."
Highlights:
- OpenAI Relaxes Military-Use Restrictions in Its Usage Policy.
- India's Response: Acknowledging AI's Military Potential.
- Implications for India: Data Protection, Security Vulnerabilities, and Strategic Concerns.
Why Is It Important?
While OpenAI has framed the amended policy as a consequence of its cybersecurity work with DARPA, the change suggests the company is relaxing its stance on military use of artificial intelligence (AI). The US military has used AI for some time. According to the Associated Press, it has used AI to pilot small surveillance drones in the Russia-Ukraine conflict, to assess soldier fitness, to track adversaries in space, and to predict when Air Force planes need maintenance. It will be interesting to see whether OpenAI and other AI startups go on to collaborate with the US military, and with the militaries of other countries, for other purposes.
Interestingly, India's IT Minister, Rajeev Chandrasekhar, cited the new usage policy as "confirmation that AI can and will be used for military purposes." He added that this reinforces India's position on regulating AI through the lens of safety, trust, and accountability.
Takeaway: OpenAI's terms have been quietly amended to allow it to work with militaries and on warfare applications. This is a concerning development, particularly given that OpenAI has scraped a vast quantity of publicly available data from around the world. While the policy states that its technology must not be used to cause harm, that does not preclude its use for military and warfare purposes.
Now, how does the use of AI in military and warfare contexts affect India? I don't want to be alarmist, but IF this is a sign of intent, consider the following:
1. No data protection: India's data protection law exempts publicly available personal data from its safeguards. Such data can be used for surveillance, model training, and strategic planning, as well as for microtargeting specific individuals. We made this error in the data protection legislation itself.
2. Security vulnerabilities: Generative AI can analyze massive datasets to uncover weaknesses and identify cyberattack techniques.
3. Strategic concerns: Data that identifies security personnel is especially vulnerable, for example, the location data of personnel on patrol. Recall the Strava data leak, in which a fitness app's public heatmap exposed the locations and routines of soldiers on military bases. Such data can be used for mission planning and simulation exercises.
Our vulnerability cannot be allowed to become our downfall. Again, what I'm writing here is meant to provoke thought. We don't know what OpenAI's intentions are, and we shouldn't put our faith in it blindly. The onus is on OpenAI to reassure its users and the countries where its technology is deployed, and on our government to seek the information needed to safeguard our security.