AI shaping our choices, minds

AI algorithms continue to prioritise engagement over ethics.

Artificial intelligence (AI) has become an integral part of modern life, transforming how we shop, consume and interact with technology.

However, while these advancements offer convenience and personalisation, they also come with significant drawbacks, which often go unnoticed. Generative AI has accelerated a longstanding process of internet-driven dehumanisation, stripping individuals of choice and autonomy. In Zimbabwe and across the globe, this issue demands urgent attention.

Rise of AI-powered systems

The seeds of this transformation were sown in the early 2000s with the emergence of AI-driven recommendation systems. Companies such as Amazon, Netflix, and YouTube pioneered this technology, changing how content and products were presented to users. Amazon, for example, described its recommendation system as a digital store tailored to each customer. As the company famously explained, it is akin to walking into a shop where the shelves rearrange themselves based on what the store predicts you will want. While this might sound innovative, it raises troubling questions about the loss of individual agency.

These systems were designed to predict what users would want next, effectively choosing for them and reducing their decision-making autonomy. By reshaping choices, these platforms introduced a subtle yet pervasive form of control that has since been normalised.

Dark side of personalisation

The influence of recommendation systems became even more pronounced with their integration into social media. YouTube was among the first platforms to embrace AI-driven recommendations, boasting of its ability to sift through billions of videos to deliver tailored content. However, this algorithmic curation actively shapes what users see, influencing how they spend their time and, crucially, how they think. Instead of simply offering a broad range of options, these systems amplify certain types of content based on calculated interests, often prioritising sensationalism over substance.

Facebook joined the race in 2006 with its News Feed, introducing personalised updates visible only to individual users. Although marketed as a way to keep people connected, concerns about data collection and privacy quickly surfaced. Moreover, by prioritising divisive and emotionally charged content, Facebook’s algorithm incentivised sensationalism. According to whistleblower Frances Haugen, Facebook’s systems exploit anger and divisiveness, encouraging publishers to create polarising content for financial gain. The result is a vicious cycle of negativity that erodes social cohesion and mental well-being.

Misinformation, privacy erosion

The harmful effects of AI-powered recommendation systems are wide-ranging and significant. Studies reveal how these systems contribute to polarisation, manipulation, and misinformation. A 2023 study on YouTube found that an increasing number of recommendations stem from problematic sources, including conspiracy theorists and extremist groups. Although the proportion of such recommendations remains relatively small, they still reach a large percentage of users, often reinforcing biases and deepening ideological divides.

Children are also disproportionately affected. A 2024 study of YouTube video thumbnails revealed that search terms popular with children frequently resulted in attention-grabbing but inappropriate or even harmful content. Despite efforts to moderate these issues, AI algorithms continue to prioritise engagement over ethics. Furthermore, the pervasive use of personalisation raises alarming privacy concerns. Platforms collect vast amounts of personal data to fuel their algorithms, often with little transparency. Users are left in the dark about how their information is used, leading to justified fears about data security and misuse.

Wait a minute, who controls the algorithms?

The underlying issue is that companies have prioritised profit over ethical responsibility. AI-powered recommendation systems are designed to maximise engagement, as this drives revenue. While terms such as “responsible AI” have become buzzwords in corporate circles, many platforms fall short when it comes to offering users real control over these systems.

Instead of providing tools that allow users to understand and influence how recommendations are generated, tech companies double down on their automated systems. Features such as control panels or options to disable AI-powered recommendations are notably absent. By presenting these systems as indispensable, companies perpetuate the myth that users would be lost without them. In truth, the lack of alternative mechanisms further disempowers individuals.

Way forward

If AI is to be a force for good, its implementation must be reimagined. Companies must prioritise transparency, providing users with detailed information about how algorithms work and why specific recommendations are made.

Building intuitive control panels should be standard practice, enabling individuals to adjust the data parameters influencing their AI-powered experiences. Most importantly, users should have the option to disable these systems entirely, reclaiming their autonomy.

However, expecting companies to regulate themselves is unrealistic. Self-regulation often conflicts with the profit-driven motives of tech giants. To address these issues, government intervention and strong regulatory frameworks are essential. Policymakers in Zimbabwe and globally must prioritise user rights over corporate interests by enforcing rules that safeguard privacy, combat misinformation, and ensure ethical AI deployment.

Public education is another crucial step. Citizens need to be made aware of the risks and impacts of AI-powered systems. Grassroots initiatives, workshops, and media campaigns can help demystify the technology, empowering people to make informed choices.

The clarion call is for a balance between progress and ethics.

AI has the potential to revolutionise lives, but it must be deployed responsibly. Zimbabwe, like many countries, stands at a crossroads. By addressing the ethical challenges posed by AI-powered recommendation systems, we can ensure that technology serves humanity rather than undermines it. The path forward requires collaboration between governments, tech companies, and civil society. Transparency, accountability, and user empowerment must become non-negotiable principles in the age of AI. It is time to eliminate systems that restrict choice and embrace a vision of AI that respects and enhances individual autonomy. Only then can we truly harmonise technology with humanity’s best interests.

  • Sagomba is a chartered marketer, policy researcher, AI governance and policy consultant, ethics of war and peace research consultant. — [email protected]; LinkedIn: @Dr. Evans Sagomba; X: @esagomba.
