AI evolution: What’s the end game?

Imagine a world where machines can think, learn, and act independently. Welcome to the era of artificial intelligence (AI). 

Recently, I had the opportunity to engage with colleagues from various fields, and our conversation sparked a profound question: what is the end game for AI? As I reflected on my previous article, “AI is here to stay, so is real intelligence”, I realised that our focus has shifted from how we use AI to what it will do once it is empowered with our collective knowledge.

This inquiry prompted me to delve deeper into the essence of AI. I discovered that the AI tools and programmes we are familiar with today are significantly inferior to what is on the horizon. Some experts assert that the next generation of AI has already arrived in 2025, while others predict it will not emerge until somewhere between 2040 and 2075. As we stand at the threshold of this revolution, it is crucial that we consider the implications of AI’s rapid evolution.

Background

The quest for simplicity and efficiency has driven humanity to create AI. By entrusting repetitive tasks to machines, humans have enhanced worker productivity and fostered a symbiotic relationship between people and machines. This collaboration is the core of AI. While challenges accompany each breakthrough, they must not overshadow the transformative benefits that human-machine collaboration will bring to businesses.

However, challenges often arise from lapses in human oversight at the inception stage of AI development. It is crucial that the humans who create AI systems are held accountable and guided by ethical principles. After all, machines can operate independently only within the bounds of their programming.

As we allow machines to evolve, it is essential that we consider every possible scenario to prevent unintended consequences that may lead to chaos in business. Unfortunately, this ideal scenario may be just that — an idealistic wish, rather than a realistic expectation.

Definition

At its core, AI is a system or programme embedded in a machine, designed by humans to mimic human intelligence. This fundamental capability of machines to perform specific tasks within predetermined contexts is known as artificial narrow intelligence (ANI).

We can see ANI in action in various applications, such as AI-powered chatbots that provide customer support and answer frequently asked questions. ANI is a tool that can be easily manipulated or controlled by humans, as machines lack the ability to think independently. The controllable nature of ANI has made it an ideal tool for various industries, including education, healthcare, finance, transportation and manufacturing.

For example, AI-powered diagnostic tools play a crucial role in vehicle manufacturing, while AI-driven GPS tools optimise navigation, getting drivers to their destinations more efficiently. These applications demonstrate the breadth of ANI’s potential. As AI continues to advance, we can anticipate substantial gains in efficiency, productivity and innovation across these industries, transforming the way businesses operate and creating new opportunities for growth.
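
To illustrate just how bounded these narrow tools are, consider the chatbot example above. Below is a minimal, purely illustrative Python sketch of a keyword-driven FAQ assistant; the keywords and canned replies are invented for this example, and the point is simply that a narrow system can only ever return what its human designers have put into it.

```python
# Minimal illustration of narrow AI (ANI): a keyword-based FAQ assistant.
# Everything it can say is fixed in advance by its human designers;
# it cannot reason beyond the entries listed below (all invented examples).

FAQ = {
    "opening hours": "We are open Monday to Friday, 8am to 5pm.",
    "refund": "Refunds are processed within 14 working days.",
    "contact": "You can reach support on the number printed on your invoice.",
}

def answer(question: str) -> str:
    """Return the first canned answer whose keyword appears in the question."""
    q = question.lower()
    for keyword, reply in FAQ.items():
        if keyword in q:
            return reply
    return "Sorry, I can only answer questions about hours, refunds and contact details."

if __name__ == "__main__":
    print(answer("What are your opening hours?"))
    print(answer("Can I get a refund on this order?"))
```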

From ANI to AGI

The ultimate goal of AI creators is to build machines that exhibit the traits associated with human intelligence (HI), or real intelligence (RI), such as learning and problem solving.

The emergence of artificial general intelligence (AGI) marks a significant milestone in the evolution of AI. AGI empowers machines to surpass their predetermined limitations, enabling them to comprehend complex data, learn from experience, and apply knowledge in a manner similar to RI. This capacity for independent learning and problem solving is reminiscent of a child’s ability to learn from observation and experience.

As we witness the rapid advancement of AGI, we are compelled to confront the unsettling prospect of machines operating autonomously. The reality is that AGI is no longer a fictional concept, but a rapidly approaching certainty that demands our attention and consideration.

Machines operating autonomously

There are legitimate concerns about the potential risks and consequences of AGI. This apprehension stems not from ignorance or resistance, but from a deep understanding of human dynamics and their capacity for manipulation. One of the primary concerns is that AGI could be manipulated to become uncontrollable, leading to decisions that are detrimental to humanity. This fear is often referred to as the “existential risk” of AI.

The possibility of AGI being misused is unsettling. For instance, what if a company’s management report, detailing the negative impact of competition on their operations, is interpreted by an AGI system as a directive to eliminate the competition? In a worst-case scenario, this could lead to autonomous drones being deployed to take out rival companies, simply because the AGI system has determined that this is the most efficient solution to the problem.

Another concern is that AGI could exacerbate existing social issues, such as job displacement, income inequality, bias, and increased poverty levels. For instance, AGI-powered automation could displace certain jobs, particularly those that involve repetitive or routine tasks. This could lead to significant job losses and exacerbate income inequality.

Imagine a scenario where AGI has advanced to the point where it can perform all tasks without human aid. In hospitals, for instance, machines could have full control over who receives medication and who does not. The likely outcome is that the affluent would be the primary beneficiaries of resource allocation, as the machines would calculate and conclude who poses a risk of failing to meet their financial obligations, based on income data, class, location, and demographics.

This raises a serious concern about who should determine how AGI is programmed to meet organisational goals. Should it be management or shareholders, who often prioritise profit over people, or data scientists, who see only binary digits? The answer is crucial, as it could determine the fate of humanity in an AGI-driven world.

Mitigation strategies

While these concerns are valid, it is essential to acknowledge that they can be mitigated through careful planning, design, and regulation. Researchers are actively exploring ways to develop AGI systems that embody transparency, explainability, and fairness. This includes creating algorithms that can detect and prevent bias, as well as designing systems that provide clear explanations for their decisions.

If these algorithms are programmed accurately, AGI systems can be designed to operate within established business ethics frameworks. For instance, in recruitment, AI tools can be programmed to ensure fairness and equity in the hiring process, eliminating biases and ensuring that candidates are selected on merit alone. By prioritising responsible AI development, we can harness the benefits of AGI while minimising its risks.
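
To make the idea of programming for fairness concrete, here is a minimal, illustrative Python sketch of one of the simplest audit checks: comparing shortlisting rates between two applicant groups (often called demographic parity). The applicant records and the 0.2 threshold are invented for this example; a real audit would use richer data and several complementary metrics.

```python
# Illustrative fairness check for a hiring model: compare shortlisting rates
# across two applicant groups (demographic parity). The records below are
# invented purely for demonstration.

from collections import defaultdict

applicants = [
    {"group": "A", "shortlisted": True},
    {"group": "A", "shortlisted": False},
    {"group": "A", "shortlisted": True},
    {"group": "B", "shortlisted": False},
    {"group": "B", "shortlisted": False},
    {"group": "B", "shortlisted": True},
]

def shortlisting_rates(records):
    """Return the share of applicants shortlisted in each group."""
    totals, shortlisted = defaultdict(int), defaultdict(int)
    for record in records:
        totals[record["group"]] += 1
        shortlisted[record["group"]] += int(record["shortlisted"])
    return {group: shortlisted[group] / totals[group] for group in totals}

rates = shortlisting_rates(applicants)
gap = max(rates.values()) - min(rates.values())
print(rates, f"parity gap: {gap:.2f}")
if gap > 0.2:  # threshold chosen for illustration only
    print("Warning: shortlisting rates differ sharply between groups - review the model.")
```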

Governments and regulatory bodies play a vital role in mitigating the risks of AI. By establishing clear guidelines and regulations, they can ensure that AI systems are developed and deployed in a responsible and safe manner. This involves vetting companies leading the AGI charge, verifying their compliance with regulatory requirements, and ensuring that they prioritise transparency and accountability.

Furthermore, governments and regulatory authorities must invest in hiring competent employees who are well versed in AI development. This will enable them to make informed decisions and provide effective oversight. To foster more inclusive and equitable AI development, it is essential to assemble diverse teams of developers from various demographics. This will allow machines to learn from a wide range of cultural perspectives, promoting decision-making that is free from prejudice and bias.

Incorporating social scientists and experts from other disciplines into the programme development process can help retain the human element in AI decision-making. By doing so, we can ensure that AI systems are designed with empathy, compassion, and a deep understanding of human values. This is a critical step towards creating an AGI era that benefits all people, regardless of their background or identity.

Conclusion

The future of AGI is a reality that we must confront and prepare for. As AI continues to evolve and improve, it is essential that we consider the potential consequences and take steps to mitigate the risks. Organisations must resist the temptation to clandestinely programme their machines at the expense of their employees. Instead, they should focus on building machines that create and protect jobs, while also generating profits for investors.

AI should be designed to improve task performance, reduce decision-making time, and create job opportunities that align with the local population’s skills and abilities. AI that displaces local workers, rather than enhancing their productivity, would be a failure of its artificial intelligence.

After all, businesses invest in AI to serve their markets, and a jobless market is not sustainable for any business. It is crucial that we prioritise the well-being of our human resource base. Businesses must recognise that their employees are also their customers, and that eliminating labour in pursuit of profits is a short-sighted strategy.

AI may be able to produce goods and services, but it cannot consume them. Without people, there is no business. While valid concerns exist about the potential risks and consequences of AI, particularly those associated with AGI, these can be mitigated through careful planning, design, and regulation. By working together, we can harness the power of AI to create a brighter future.

AI remains a complex and multi-faceted technology with the potential to revolutionise various aspects of our lives. As we have already seen from the positive spin-offs of ANI, and as we await the full potential of AGI, I firmly believe that AI will ultimately benefit both the employer and employee.

  • Kahari is a seasoned recruitment advisor with 20+ years’ experience. He is an AI enthusiast and a published poet, blending industry expertise with creative flair. — [email protected]
