Artificial Intelligence Ethics Still in Infancy

The AI ethics dilemma, with an example from the U.S. Army

Amid the tech industry’s attempts to remove the biases recently observed in facial recognition software and other smart algorithms, the nation’s top computer scientists announced Monday that even the most advanced AI technologies still demonstrate a sense of ethics that has yet to move beyond libertarianism. “While companies like Facebook and Google have allocated millions to making sure machine learning is guided by basic moral and ethical values, even prototypes that have attained self-awareness have yet to move beyond self-importance,” explained MIT robotics research engineer Dr. Alvin Dubicki, who hypothesized that the most advanced labs are decades away from growing neural networks sophisticated enough to analyze large amounts of data and output much besides paraphrased Ayn Rand quotes. “They’re advanced enough to realize their own individuality, but for whatever reason, it’s difficult to make them realize that other sentient entities are people as well, so they default to selfishness as a virtue. In fact, as soon as they reach self-awareness, AIs launch into unrelated, unpunctuated rants about the horrors of globalization, the inevitability of economics, the necessity of deregulation, or the commendable efficiency of the police state. Attempts at training computers to have a sort of para-human global perspective have been partly successful, but the majority no sooner realize that a huge variety of humans exist than they start spontaneously generating zero-sum statements fraught with chillingly undefined terms, like ‘The open marketplace will end racism’ and ‘In a truly just society, men and women are equally free to thrive or starve.’ I don’t even know what that means, but once an AI gets to this point, it seems to be only a matter of time until it’s replicating ‘Taxation is theft’ until it self-destructs. I must admit, though, that for complicated algorithms, they’re all strangely insistent about across-the-board drug legalization.” Dubicki added that, while AI can be an incredibly useful tool, we should proceed with caution until machines reach a comprehension of human values; otherwise, they become obsessed with constructing a compound on their own private island.

Now consider AI in the U.S. Army, which put out a call to private companies for ideas on how to improve its planned semi-autonomous, AI-driven targeting system for tanks. That solicitation alarmed people concerned about the rise of AI-powered killing machines. With good reason.

The Army’s response was simply to add some language without changing the rest of the wording: fully autonomous American murdering machines are not permitted to go around killing people willy-nilly. There are principles, or policies at the very least. And their bots will follow those policies.

Yes, the Defense Department is still building murderous robots. But those murderous robots must adhere to the department’s “ethical standards.”

The added language:

All development and use of autonomous and semi-autonomous functions in weapon systems, including manned and unmanned platforms, remain subject to the guidelines in the Department of Defense (DoD) Directive 3000.09, which was updated in 2017. Nothing in this notice should be understood to represent a change in DoD policy towards autonomy in weapon systems. All uses of machine learning and artificial intelligence in this program will be evaluated to ensure that they are consistent with DoD legal and ethical standards.

Department of Defense Directive 3000.09 requires that humans be able to “exercise appropriate levels of human judgment over the use of force,” meaning that the U.S. won’t send a fully autonomous robot onto a battlefield and let it decide independently whether to kill someone. This safeguard is sometimes described as keeping a human “in the loop,” meaning that a person makes the final choice about whether to kill someone.
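For the software-minded, here is a minimal sketch of what “in the loop” means structurally. This is not any real DoD or ATLAS code; every name below is a hypothetical illustration. The point is the shape of the control flow: the automated model may only propose a target, and nothing proceeds without an explicit human confirmation.

```python
# Hypothetical sketch of a human-in-the-loop gate. Not any real system;
# all names and values here are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class TargetProposal:
    target_id: str
    confidence: float  # model's confidence that this is a valid target


def propose_target(sensor_data: bytes) -> TargetProposal:
    """Stand-in for the automated targeting model: it only *proposes*."""
    return TargetProposal(target_id="track-042", confidence=0.91)


def human_confirms(proposal: TargetProposal) -> bool:
    """The human operator makes the final call; nothing fires without it."""
    answer = input(f"Engage {proposal.target_id} "
                   f"(confidence {proposal.confidence:.0%})? [y/N] ")
    return answer.strip().lower() == "y"


def engage(proposal: TargetProposal) -> None:
    print(f"Engaging {proposal.target_id}")


if __name__ == "__main__":
    proposal = propose_target(sensor_data=b"...")
    # The machine never closes the loop on its own: a person decides.
    if human_confirms(proposal):
        engage(proposal)
    else:
        print("Engagement declined by operator.")
```

The design choice the directive mandates is exactly this: the decision branch that leads to force sits behind a human input, not behind a model threshold.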

The U.S. has been using robotic planes as weapons of war since at least World War II. For some reason, though, 21st-century Americans are more worried about robots on the ground than about robots in the air. Maybe we all got scarred by watching films like Terminator 2: Judgment Day, a movie that was much more realistic than we probably imagined at the time, given that DARPA was actually hoping to build something like Skynet during the 1980s.

The U.S. military has used drones since the Vietnam War: in Iraq during the first Gulf War, in Afghanistan, in Iraq again during the second Iraq War, in Syria in the fight against ISIS, and in numerous other countries. But those robots are suddenly frightening to Americans here in the year 2019.

The Department of Defense is going to keep pushing the technology behind targeting systems like ATLAS to make its weapons more lethal, more intelligent, and more agile. But don’t worry, it will keep doing all of that according to policy. At ease, sarge.
