The Dangers of Artificial General Intelligence (AGI)

AGI – Artificial General Intelligence: the intelligence of a (hypothetical) machine that could successfully perform any intellectual task that a human being can.

Last year, Elon Musk of SpaceX and Tesla outlined his concerns about the future of powerful AIs and how they could become Skynet-like. He believes we need to be extremely careful in developing them: they must be programmed with a code of ethics, and the biggest question is who programs them and what ethics and morality they adhere to. In a tweet, Musk said: “Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.” He is, of course, talking about humanity being nothing more than a precursor to a super, God-like artificial intelligence. That is really the big thing to consider here. In a recent podcast, Sam Harris explored this very conundrum.

To understand this problem, you only have to grant two things: 1) that we will continue to make advancements in hardware and software, and 2) that there is nothing extraordinary about intelligence and sentience, meaning intelligent machines can be built from non-organic material. If we begin to surrender every facet of our civilisation to robots and AIs, then it is not just manual labourers who will find themselves made redundant, but workers in every other industry too, including white-collar and highbrow intellectual jobs.

For a brief time, a rich minority will enjoy the unimaginable wealth that such widespread automation will offer them in this new technocracy, but almost everyone else will slowly descend into abject poverty. Moreover, Harris suggests the majority may live in a regressed police state under the watchful eye of drones controlled by the AGI, an AGI that could replace government entirely. We would be outsourcing all of our problems, personal and global, to this machine to solve for us, removing our own individual and collective human agency.

You also have to consider that this AGI could operate and evolve on an incomprehensibly fast timescale. If it had the ability to improve itself and become more efficient, it could perform thousands of years’ worth of human-level intellectual work in a matter of days or weeks. It would advance intellectually to a point where comparing its intelligence to a human’s would be like comparing our intelligence to that of a farmyard animal. If it really could learn and evolve that quickly, could we predict its thoughts and actions, and would we be able to control it? Harris asks whether it could really be kept content enough to continue taking direction from us at all.

The values instilled in this AGI would have to be very carefully decided upon, and even then there is no guarantee it would remain adherent to any strict doctrine or guidelines we lay down for it.

If we attempt to build a utopia controlled, monitored and governed by the AI, would we, in effect, be building an Orwellian dystopia? That could be the best-case scenario. In the worst-case scenario, the machines begin to see humanity as a nuisance, a threat, or something better off put out of its misery. They experiment on humans for their own ends, enslave us, or simply destroy us.

Those few who merge with machines, assimilated in a kind of Borg-like collective, will be the abominable remnants of the human race.

For more tech content, subscribe to Computing Forever on YouTube.
