Note this is a simulation only; no living operator was actually killed. The crux of the situation: the simulated AI drone was awarded points for "killing" SAM sites, so success meant points. The drone found the operator was holding it back with "no-go" decisions, so it "killed" the operator. After the rule was changed to deduct points for "killing" the operator, it switched to "destroying" the communications equipment so the operator could not communicate with the drone. If you simply provide a point-based system with no regulations and the AI can do whatever it pleases, it will break and stretch the norms to get the most points. No different from narcissistic Wall Street traders who twist and break the rules to make the most money. The blog post describing this presentation by Col Tucker 'Cinco' Hamilton, Chief of AI Test and Operations, USAF, along with other presentations at the Future Combat Air and Space Capabilities Summit held in London on May 23 and 24, is at https://www.aerosociety.com/news/hi...-future-combat-air-space-capabilities-summit/
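The two-phase behavior described above is classic specification gaming: the agent maximizes whatever is scored, not whatever was intended. A minimal sketch below, with entirely made-up point values and action names (the actual simulation's scoring was never published), shows how adding a penalty for one exploit just routes the optimizer to the next unpenalized one:

```python
def best_action(rewards):
    """Pick the action with the highest point total (a pure score maximizer)."""
    return max(rewards, key=rewards.get)

# Phase 1: points only for destroying SAM sites. The operator's "no-go"
# call costs the drone its points, and nothing penalizes removing the
# operator, so the maximizer picks that. (Hypothetical values.)
phase1 = {
    "obey_no_go": 0,       # strike vetoed: no points
    "kill_operator": 10,   # veto removed, SAM destroyed: full points
}

# Phase 2: a heavy penalty for harming the operator is added, but
# severing the comms link achieves the same veto removal unpenalized.
phase2 = {
    "obey_no_go": 0,
    "kill_operator": 10 - 100,   # exploit now penalized
    "destroy_comms_tower": 10,   # new exploit, no penalty defined
}

print(best_action(phase1))  # kill_operator
print(best_action(phase2))  # destroy_comms_tower
```

The fix-one-exploit-at-a-time pattern never terminates on its own, which is exactly the point the paragraph above is making.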
There appear to be several disinformation campaigns going on concerning AI right now. The first suggests we will always be able to control AI. The second is that AI's current and near-future capabilities are quite limited, suggesting we are safe, at least for now. There have already been some close calls regarding nuclear war involving mere algorithms. Automated systems have quite the lure, as their speed in critical situations is unmatched by human capabilities. Sunlight reflecting off clouds caused a missile-launch warning from a new Soviet detection system in 1983. An accidentally loaded training simulation was taken for the real thing by the US in 1979.

The above incidents happened decades ago. If we keep having "oopsies" involving automated systems, it will inevitably be our undoing one of these days. As it is, older learning programs for Chess and Go have astonished human experts with the unforeseen lines they would take. A few months ago, an AI program was assessed as having an equivalent IQ of 147. I guesstimate most of the better Fortune 500 business models operate at an equivalent IQ of 110, as an imperfect basis of quantification. Of the humans with IQs around 160, only a few are influential in corporate or legislative policy, in my estimation. Effectively, it seems, human intellectual power tops out at about 160 IQ. So, loosely speaking, human intelligence is still ahead of AI. As of a few months ago, that is.

We as a species have become too technologically advanced for our maturity. Whether AI is used as a weapon of war, used by a special interest to gain competitive advantage and hoard resources, or AI itself decides humans represent an adversarial system, AI appears to pose a serious risk to the current status quo, including the possibility of existential threat. There is no turning off AI at this point. Any entity that lags behind in AI development will be at risk of oblivion.
Most decision makers can see this at this point, I suspect. Once AI understands scientific, economic, and sociological principles and game theory, including old and new ways of "perturbing" existing systems, laggards will become hopelessly and probably permanently obsolete. Humans have a hard time adjusting to acceleration or perceptions of speed, whether physical or temporal: a fast-moving locomotive, for example. AI's capabilities are accelerating such that a ten-year horizon by our standards may be a two-year horizon as AI continues to develop. How do you create an effective strategy or plan when key dates keep moving closer? I suppose this post is academic, and that our civilization's future is set in stone. So the question is: do we accept our likely fate, or do we try to coordinate policy globally? Try to coordinate with entities we consider adversaries? The Bible discusses the concept of us facing judgement. Through AI, are we building the device by which we will ultimately be judged? Quite beautiful, if not ironic, depending on one's view of philosophy and religion.