With the rapid rise of artificial intelligence (AI) in global defense systems, Japan is stepping forward with new guidelines to manage the legal, ethical, and technical risks of using AI in military technology.
The Japan Ministry of Defense (JMOD) released these guidelines in early June 2025. Their goal is to create a framework that ensures human control remains central—even in advanced AI-powered systems used by the military.
This move comes shortly after Japan passed a wider AI regulation law on May 28, 2025, which focuses on promoting safe AI innovation across industries while protecting against its risks.
Addressing the Risks of Autonomous Weapons
A major reason behind Japan’s new AI defense policy is the growing concern over Lethal Autonomous Weapons Systems (LAWS). These are weapons that, once activated, can find and attack targets without human input.
Japan has stated clearly that it will not develop LAWS and is calling for a global ban on such technology. These guidelines are meant to back up that commitment by ensuring no AI weapon system operates beyond human control.
A Human-Centered Approach to AI in Defense
Japan’s approach is built on a strong belief in keeping “meaningful human control” over AI. This means:
- AI can assist in tasks but not make life-or-death decisions alone
- Human judgment and responsibility must always be involved
- Systems must follow both domestic and international law, especially International Humanitarian Law (IHL)
By following this human-first principle, Japan is showing the world that technology and ethics can go hand in hand.
How the Guidelines Work: A 3-Step Risk Management Process
The JMOD has created a three-step process to manage AI-related risks in defense equipment development. Here’s how it works:
1. Equipment Classification
All defense systems with AI will be sorted into “high risk” or “low risk” categories based on the degree of control the AI exercises over weapons functions.
- High-risk systems face deeper legal and technical reviews
- Low-risk systems can be reviewed internally by the project team
2. Legal and Policy Review
High-risk projects are reviewed by a Legal and Policy Board. Two main checks are applied:
- A-1: Is the system fully compliant with international and national law?
- A-2: Does the system still allow for meaningful human control?
If the answer is “no” to either question, the project is not allowed to move forward.
3. Technical Review
If a system passes legal checks, it moves to a Technical Review Board, which checks seven technical safety points (labeled B-1 to B-7). These include:
- B-1: Ensuring human operators can take control
- B-2: Making AI systems easy for people to understand and operate
- B-3: Avoiding harmful bias in AI decisions
- B-4: Keeping AI systems transparent and easy to audit
- B-5: Making sure systems are tested for reliability and security
- B-6: Reducing risk of system failures or errors
- B-7: Confirming the system follows all laws
This full process is designed to ensure that AI in defense equipment is safe, legal, and controlled by humans.
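The three-step process above works like a gated pipeline: a project must clear each stage before reaching the next. The sketch below models that flow in Python. It is purely illustrative — the function names, check labels, and pass/fail structure are assumptions for clarity, not any official JMOD implementation.

```python
# Hypothetical model of the JMOD three-step risk-management flow.
# Check labels follow the article's A-1/A-2 and B-1..B-7 naming;
# everything else here is an illustrative assumption.

LEGAL_CHECKS = ["A-1: legal compliance", "A-2: meaningful human control"]
TECHNICAL_CHECKS = [f"B-{i}" for i in range(1, 8)]  # B-1 through B-7

def review_project(high_risk: bool,
                   legal_results: dict,
                   technical_results: dict) -> str:
    """Return the outcome of the three-step review for one project."""
    # Step 1: equipment classification.
    # Low-risk systems are reviewed internally by the project team.
    if not high_risk:
        return "approved (internal project-team review)"

    # Step 2: Legal and Policy Board — both A-checks must pass,
    # otherwise the project does not move forward.
    if not all(legal_results.get(c, False) for c in LEGAL_CHECKS):
        return "rejected at legal and policy review"

    # Step 3: Technical Review Board — all seven B-checks must pass.
    if not all(technical_results.get(c, False) for c in TECHNICAL_CHECKS):
        return "rejected at technical review"

    return "approved"
```

For example, a high-risk system that fails the meaningful-human-control check (A-2) is stopped at step 2 and never reaches the technical review — mirroring the gated structure the guidelines describe.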
Supporting Business Involvement with Clear Rules
Japan also hopes these guidelines will encourage more private companies to participate in defense AI projects. By making rules clear, companies can work with confidence knowing what is and isn’t allowed.
Defense Minister Nakatani Gen said during a press event that the guidelines are designed to “provide predictability to all business operators” interested in AI research and development.
Japan’s Broader AI Defense Vision
Japan’s Ministry of Defense has already identified seven key areas where it plans to use AI in military settings:
- Target detection and tracking
- Data analysis
- Command and control
- Logistics and support
- Use of unmanned systems (drones, robots)
- Cybersecurity
- Administrative efficiency
These areas show that Japan’s use of AI will be supportive rather than offensive—designed to help humans make better decisions, not replace them.
A Global Message: Responsible AI for Peace
The United Nations has called for a global agreement to ban fully autonomous weapons by 2026. While some countries, such as Russia and China, have yet to support such talks, Japan is actively engaging with international partners to push for strong rules.
In its recent working paper to the UN, Japan reaffirmed its “human-centric” and “responsible AI” stance—showing leadership in shaping how the world uses military AI.
Japan’s new AI guidelines in defense show how a country can embrace innovation while staying committed to human rights, global peace, and ethical technology use. As AI continues to grow in military applications worldwide, Japan’s structured, transparent, and principled approach may become a global model.