
Will machines rule over humans in future?


HARIDHA P 12-Jan-2024

Some of the world's brightest minds are terrified, or at the very least alarmed. The list is long. Bill Gates is among them. So was Stephen Hawking. So is Elon Musk. They all warn against smart machines that, powered by ever greater artificial intelligence (AI), become so good at what they do that they could harm humans. The concern seems real enough that some of the smartest people have taken a public stance on the issue. In 2015, a letter signed by 150 eminent scientists, entrepreneurs, writers, and others warned of the risks of artificial intelligence.

There have been warnings outside of that letter as well. Stephen Hawking, the late British physicist, suggested that the rise of AI could be the "worst event in the history of our civilization." Elon Musk, the founder of Tesla, believes that AI development could lead to the creation of an "immortal dictator."

The trouble is that, beyond obviously over-the-top, fantastical Hollywood blockbusters like Terminator and The Matrix, there is little certainty about what direction AI will take. It is possible that humans will one day look like bags of flesh, or talking monkeys, next to supremely intelligent machines. But that future is so far away that even the brightest minds cannot imagine it in 2018.

Instead of looking to the far-off future (think 1,000 years from now), it is simpler to understand how AI might evolve over the next 15 years. And it is reassuring to know that there will not be a time-traveling Terminator in 2035.

The threat of AI is not that people will lose control of intelligent machines and systems. The real threat is closer to home: a few people will gain outsized power through AI and smart machines.

Consider intelligent machines rather than murderous robots.

There is no end-of-the-world scenario on the horizon, but AI will improve and smart machines will become exponentially smarter by 2030. All indications point to such a scenario.

In one comparison of four digital assistants, Google Assistant was the most accurate, understanding all of the queries and answering them with 85.5 percent accuracy. Siri came in second, understanding 99.5 percent of inquiries and answering them with 78.5 percent accuracy.

The results were impressive. But the bigger takeaway was how far these assistants have come in just two years. Notably, what has changed is not only the accuracy with which they understand and answer queries, but also their skill set, which has grown by leaps and bounds thanks to a continual cycle of innovation.

These virtual assistants are already present in our phones, and they are gradually expanding their sphere to include everything "smart." They are branching out into smart speakers, smart TVs, smart lighting, smart doors, smart shoes, and smart coats.

However, as the use of AI and smart machines grows, it is important to remember that in all of these cases, AI is used as an assistive tool, a technology that broadens the scope of impact while reducing human labor. At no point is AI treated as an independent entity with the authority to make decisions on its own.

"The artificial intelligence that you and I will be working with on a daily basis, the idea that machines will become self-aware and decide to kill all humans, is all science fiction," said Professor Mausam of IIT-D's Department of Computer Science.

Mausam goes on to argue that in the future, intelligent machines, such as an armed and intelligent drone, could be used to kill people. "Will a human decide what the AI should do, or will the AI decide for itself? I believe the behavior of the drone will be decided by a human."

Fear humans with AI rather than machines with AI.

This is where the uncomfortable answer enters the picture. As Mausam points out, the threat of AI is not that it will become self-aware. The fear is that humans will use it in the frightening ways that only humans can. This is a threat that Yuval Noah Harari exposes so well in his book Homo Deus, in which he argues that a small elite will be able to rule over millions via AI-assisted computers, and theirs will not be a benevolent rule. According to Harari, one of the hazards of AI is that it entirely devalues humans.

Seen in a different light, the immediate danger of AI and smart machines is that they will make the world more bureaucratic and chaotic rather than more efficient. Smart systems will also concentrate power in the hands of a few. One example is India's Aadhaar, which uses sophisticated biometric recognition technology to identify individuals. Instead of providing people with ID cards, it relies on smart devices that, when unable to identify a person, deem that person non-existent. More of these smart technologies will come online in the future.


