A new opportunity would fund development of an AI framework to coordinate actions between a mix of machines on the battlefield.
Humans learn by doing. The shared experience of enduring and overcoming hardship is what bonds disparate recruits into functional teams, which over time learn each other’s strengths and weaknesses and, ideally at least, adapt to make the best use of each other. As more robots move onto the battlefield, DARPA wants those machines to work together, learn from one another, and steer away from actions that cause regret. To spark research in this area, the Pentagon’s blue-sky projects wing launched “CREATE,” or “Context Reasoning for Autonomous Teaming.”
The Artificial Intelligence Exploration Opportunity, announced Sept. 3, seeks research into how a group of small, disparate uncrewed vehicles could work together autonomously. Phase 1 calls for feasibility studies; Phase 2 refines the AI teaming techniques and algorithms from Phase 1 to run on vehicles with existing hardware, either in simulation or on the hardware itself.
In much the same way that a group of people makes decisions together on the fly, the solicitation notes that “local decision making is less informed and suboptimal but is infinitely scalable, naturally applicable to heterogeneous teams, and fast.”
For robots that have to work together in battle, those last traits are especially important, as they allow independent autonomous action, “thus breaking the reliance on centralized C2 and the need for pre-planned cost function definition.”
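The trade-off the solicitation describes can be illustrated with a minimal sketch: each agent chooses a task using only its own local information, with no central planner. The agent names, positions, and tasks below are hypothetical, not from the solicitation; the point is that local choice scales to any team size but can be suboptimal, since nothing stops two agents from chasing the same task.

```python
def local_choice(agent_pos, tasks):
    """Pick the nearest task by squared distance, using purely local reasoning."""
    return min(
        tasks,
        key=lambda t: (t[0] - agent_pos[0]) ** 2 + (t[1] - agent_pos[1]) ** 2,
    )

# Hypothetical heterogeneous team: one aerial and one ground vehicle.
agents = {"uav-1": (0, 0), "ugv-1": (10, 10)}
tasks = [(1, 1), (9, 9)]

# Each agent decides independently -- no centralized C2 required.
assignments = {name: local_choice(pos, tasks) for name, pos in agents.items()}
```

Because every agent runs the same cheap computation in parallel, losing the communications network degrades coordination quality but never halts the team.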
This is a step beyond the remotely directed and controlled systems of today, which use extensive communications networks to give humans fine-tuned control over how machines move. Should those networks break down, the goal is for machines to move toward objectives on their own, even if those moves are less efficient or effective than the choices a human operator would have made. Advances in electronic warfare, combined with fears about the loss of communication networks, both terrestrial and in orbit, are part of what’s driving military research and investment in autonomous machines.
What sets CREATE apart from, say, swarming systems of quadcopters is that DARPA wants a framework that can coordinate a heterogeneous group of machines: quadcopters, unmanned ground vehicles, and different kinds of flying and swimming robots. In other words, a whole mechanical menagerie working toward a common purpose. With the right AI tool, the machine-machine team should be able to discern the context of where it is and what is happening, then act independently. It should also be able to meet multiple spontaneous goals that arise over the course of a mission.
Getting to that point means a system that can learn and, especially, a system that can learn from mistakes.
“Agents within the team will have mechanisms for regulation to ensure (favorable) emergent behavior of the team to (1) better ensure the desired mission outcome and (2) bound the cost of unintended adverse action or ‘regret,’” reads the solicitation.
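In the online-learning sense the solicitation borrows, “regret” is the gap between the reward an agent actually accumulated and the reward the best fixed action would have earned in hindsight; a team that bounds regret bounds the cost of bad emergent behavior. A minimal sketch of that bookkeeping, with hypothetical actions and rewards not drawn from the solicitation:

```python
def cumulative_regret(rewards_received, reward_table):
    """Regret = best fixed action's total reward minus what was actually earned.

    rewards_received: reward the agent actually got in each round.
    reward_table: per-round dict of action -> reward that was available.
    """
    best_fixed = max(
        sum(round_rewards[action] for round_rewards in reward_table)
        for action in reward_table[0]
    )
    return best_fixed - sum(rewards_received)

# Two rounds, two actions; the agent chose "a" then "b", earning 1 + 1 = 2.
# The best fixed action in hindsight, "a", would have earned 1 + 2 = 3.
table = [{"a": 1, "b": 0}, {"a": 2, "b": 1}]
regret = cumulative_regret([1, 1], table)  # 3 - 2 = 1
```

An algorithm whose cumulative regret grows sublinearly in the number of rounds is, on average, learning to act as well as the best fixed strategy, which is one formal reading of the solicitation’s goal of bounding “unintended adverse action.”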
Bread & Circus Perfect Product Placement
This year’s Super Bowl commercials leaned into AI’s technological advancements to capture the mind of the horde, like this one below from the agency FCB Chicago. How can you compete against robots that out-run, out-bike, and out-perform humans in just about every way?
Experts warn of actual AI risks – we’re about to live in a sci-fi movie
Long before artificial intelligence (AI) was even a real thing, science fiction novels and films were warning us about the potentially catastrophic dangers of giving machines too much power.
Now that AI actually exists, and in fact, is fairly widespread, it may be time to consider some of the potential drawbacks and dangers of the technology, before we find ourselves in a nightmarish dystopia the likes of which we’ve only begun to imagine.
Experts from the industry as well as academia have done exactly that, in a recently released 100-page report, “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, Mitigation.”
The report was written by 26 experts over the course of a two-day workshop held in the UK last month. The authors broke down the potential negative uses of artificial intelligence into three categories – physical, digital, and political.
The digital category lists all of the ways that hackers and other criminals can use these advancements to hack, phish, and steal information more quickly and easily. AI can be used to create fake emails and websites for stealing information, or to scan software for potential vulnerabilities much more quickly and efficiently than a human can. AI systems can even be developed specifically to fool other AI systems.
Physical uses include AI-enhanced weapons to automate military and/or terrorist attacks. Commercial drones can be fitted with artificial intelligence programs, and automated vehicles can be hacked for use as weapons. The report also warns of remote attacks, since AI weapons can be controlled from afar, and, most alarmingly, “robot swarms” – which are, horrifyingly, exactly what they sound like.
Lastly, the report warned that artificial intelligence could be used by governments and other special interest entities to influence politics and generate propaganda.
AI systems are getting creepily good at generating faked images and videos – a skill that would make it all too easy to create propaganda from scratch. Furthermore, AI can be used to find the most important and vulnerable targets for such propaganda – a potential practice the report calls “personalized persuasion.” The technology can also be used to squash dissenting opinions by scanning the internet and removing them.
The overall message of the report is that developments in this technology are “dual use” — meaning that AI can be created that is either helpful to humans, or harmful, depending on the intentions of the people programming it.
That means that for every positive advancement in AI, there could be a villain developing a malicious use of the technology. Experts are already working on solutions, but they won’t know exactly what problems they’ll have to combat until those problems appear.
The report concludes that all of these evil-minded uses for these technologies could easily be achieved within the next five years. Buckle up because they are here.
2018 Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS)
In 2016, the Fifth Review Conference of the High Contracting Parties to the Convention on Certain Conventional Weapons (CCW) established a Group of Governmental Experts (GGE) on emerging technologies in the area of lethal autonomous weapons systems (LAWS). The GGE held its first meeting from 13 to 17 November 2017 in Geneva.
At their 2017 Meeting, the High Contracting Parties to the CCW agreed that the GGE on LAWS shall meet again in 2018 for a duration of ten days in Geneva. The first meeting of the GGE on LAWS in 2018 took place from 9 to 13 April. The second meeting will be held from 27 to 31 August 2018. The meeting will take place in Conference Room XVIII on 27 August and in Room XX from 28 to 31 August 2018. Ambassador Amandeep Singh Gill of India is the chair of both meetings of the GGE on LAWS.
The final report of the 2017 meeting of the GGE on LAWS, particularly the “Conclusions and Recommendations” section, provides guidance and direction for the work of the GGE to be undertaken in 2018.
The overarching issues in the area of LAWS that will be addressed in the 2018 meetings of the GGE include:
- Characterization of the systems under consideration in order to promote a common understanding on concepts and characteristics relevant to the objectives and purposes of the CCW;
- Further consideration of the human element in the use of lethal force; aspects of human-machine interaction in the development, deployment and use of emerging technologies in the area of lethal autonomous weapons systems;
- Review of potential military applications of related technologies in the context of the Group’s work;
- Possible options for addressing the humanitarian and international security challenges posed by emerging technologies in the area of LAWS in the context of the objectives and purposes of the Convention without prejudging policy outcomes and taking into account past, present and future proposals.
Emerging Commonalities, Conclusions and Recommendations (including Possible Guiding Principles) – Unformatted advance version
The Chair of the CCW GGE on LAWS would like to invite ALL non-governmental actors to contribute reflections, ideas, insights and experiences to enrich the April and August deliberations of governmental experts. Please refer to the newly released programme and agenda to frame your contribution(s).
You may also submit contributions to email@example.com
- IE: Context Reasoning for Autonomous Teaming (CREATE)
- Squad X Program Envisions Dismounted Infantry Squads of the Future
- DARPA Announces $2 Billion Campaign to Develop Next Wave of AI Technologies
- AI Next Campaign
- Heard Of “Squad X”? – What You Don’t Know; Can Kill You!
- Video Analytics Market worth 8.55 Billion USD by 2023
- The Malicious Use of Artificial Intelligence
- Robots of death, robots of love: The reality of android soldiers and why laws for robots are doomed to failure
- 2018 Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS)
- US Army clarifies its killer robot plans – Naked Security
- Give robots ‘personhood’ status, EU committee argues | Technology | The Guardian
- Saudi Arabia’s Robot Citizen Wants A Family, Career & Human Emotions – News Punch
- Meet the Artificial Intelligence Program That’s Learning Everything | The Daily Sheeple
- Chemtrails: Aerosol and Electromagnetic Weapons in the Age of Nuclear War | Global Research – Centre for Research on Globalization