Navigating the muddy waters of AI regulation (II)

In most African countries, AI policies are not yet in the public domain.

At the time of writing, fewer than 10 African countries (among them South Africa, Rwanda, Egypt, Mauritius, Morocco, and Sierra Leone) have developed and adopted national artificial intelligence (AI) strategies.

Countries such as Uganda and Tunisia appear to be in the process of drafting their AI policies. 

However, it is encouraging to note that more and more African countries have established, or are in the process of establishing, expert commissions, task forces, or regulatory authorities to guide their adoption of AI.

Nevertheless, it is very worrying that fewer than 15 African countries, Zimbabwe included, have explicitly recognised AI as a priority area requiring urgent attention in their national development plans.

The big question is: what are African countries waiting for? Research shows that many African countries are either still contemplating their AI policy initiatives or are at the conceptualisation stage of drafting them.

What is unambiguous is that in most African countries, AI policies are not yet in the public domain. This suggests that the development of AI frameworks is opaque, that the process is taking too long to finalise, or both.

To those of us following developments in the field of AI, this is truly concerning, and it implies that African governments do not see the urgency required in developing AI regulatory frameworks.

Way forward for Zimbabwe

In last week’s article, I proposed the development and implementation of an AI governance and regulatory framework (AIGRF) and the establishment of an AI Regulatory Authority (AIRA). 

Once the government of Zimbabwe, through the Ministry of Information Communication Technology, Postal and Courier Services (ICT), has decided to regulate, the next big question is what form of regulation to pursue.

This decision is particularly delicate, as regulating a complex, rapidly evolving, and often misunderstood technology such as AI carries the risk of what we refer to as “regulatory misalignment”.

Many AI researchers and consultants, myself included, highlight this concern because such misalignment usually occurs when regulations fall short of addressing the harmful issues they were intended to target in the first place, produce unintended consequences, or end up introducing unacknowledged compromises between different objectives.

Having said all this, I propose the following four AI regulatory regimes: disclosure, registration, licensing, and auditing.

However, before I delve into the intricacies of each, it is important to highlight that each of these regimes has regulatory alignment issues that need to be acknowledged and addressed. 

We must also accept that AI-related regulatory goals (such as mandating transparency, fairness, privacy preservation, accuracy, and explainability) cannot all be achieved at the same time.

In the case of government regulation, the primary task of the legislator is first to identify the goals of regulating AI technologies and then to propose a law suitable for achieving those goals.

This is called the principle-based approach (PBA), under which the chosen laws seek to uphold essential principles.

Alternatively, legislators can develop the law to take a more pragmatic stance, focusing primarily on risk mitigation. This is called the risk-based approach (RBA).

These two approaches should not be seen as mutually exclusive. A good example is the European Union (EU), which has adopted a risk-based approach while still upholding principles such as human oversight.

China, on the other hand, has adopted a principle-based approach, imposing general principles and rules that are independent of actual risk levels.

Secondly, when regulating an emerging technology such as AI, the legislators or the government of Zimbabwe should ask whether they are regulating the technology itself or its applications.

The argument for regulating the technology itself rests on the understanding that the technology may be fundamentally dangerous and that the risks associated with AI can be managed by regulating its developers.

When it comes to regulating AI applications, by contrast, the government assumes that the technology itself is not dangerous and that potential risks arise only from its use, thus putting the focus on deployers and users.

If Zimbabwean legislators decide to produce a law that specifically targets the technology, it must provide clear, precise technical definitions and specifications.

This task has proven to be very challenging, as exemplified by many AI regulatory frameworks that provide relatively broad and vague definitions of general-purpose AI models.

Conversely, if legislators decide to focus on regulating AI applications, they must anticipate the possible uses of the technology and their potential risks.

This, too, is a very challenging task, given the rapid and sophisticated advancement of AI technologies. Again, it is important to highlight that these two approaches (regulating the technology and regulating its applications) are not mutually exclusive.

Hence most AI regulatory frameworks, such as the EU’s AI Act, incorporate both: one set of rules primarily focuses on uses, which are determined by sector and classified by their degree of risk; the other addresses general-purpose AI models, which are considered to present particular risks due to their advanced capabilities.
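
For technically minded readers, the short Python sketch below illustrates this dual structure. It is a hypothetical toy, with invented use cases and sector lists, and tier names loosely modelled on the EU AI Act’s publicised risk tiers; it does not describe any actual regulator’s system.

```python
# Hypothetical sketch: which rule sets apply to an AI system under a
# combined regime. Tier names are loosely modelled on the EU AI Act's
# public risk tiers; the sector lists and use cases are invented.

UNACCEPTABLE_USES = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK_SECTORS = {"health", "education", "employment", "law_enforcement"}

def applicable_rules(use_case: str, sector: str, general_purpose: bool) -> list[str]:
    """Return every rule set that applies to the described AI system."""
    if use_case in UNACCEPTABLE_USES:
        return ["prohibited"]              # banned outright; nothing else matters
    rules = []
    if general_purpose:
        rules.append("gpai_obligations")   # model-level rules, independent of use
    if sector in HIGH_RISK_SECTORS:
        rules.append("high_risk")          # e.g. prior conformity assessment
    return rules or ["minimal_risk"]       # default: transparency duties only

# Example: a general-purpose model deployed for CV screening in hiring
print(applicable_rules("cv_screening", "employment", general_purpose=True))
# -> ['gpai_obligations', 'high_risk']
```

Note how both rule sets can apply at once: the use is regulated by its risk class, while the underlying model carries its own obligations.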

Third, there are critical decisions that should be made. 

The decision to regulate necessitates a careful arbitration between various options for substantive measures. 

When it comes to AI, several questions arise, and they need to be attended to. The first concerns the process for releasing AI models and systems: should developers be permitted to release their AI models without any form of oversight?

Should developers self-declare and self-assess their AI models? Or should they be subjected to a rigorous prior control system and obtain the go-ahead from a regulatory agency before releasing an AI model?

Fourth, another important issue concerns the appropriate regulatory regime for open-source models and applications as opposed to closed-source models.

Another question here is: Should a specific regulatory regime be established for the most powerful and capable AI models? 

Additionally, legislators should confront the complex issue of determining the measures to foster innovation and alleviate regulatory burdens. 

The law they put in place must delineate whether liability rests with infrastructure providers, downstream deployers, or end users, and must empower regulatory authorities to enforce it.

Either way, substantial investment is needed to ensure that the regulatory agency is competent and has the necessary resources and expertise to oversee the activities and practices of AI companies.

Moreover, the legally permitted penalties must be large enough to compel compliance, even from the biggest AI tech companies, which usually have significant financial muscle.

Legislators should ensure that the technical implementation of the AI law is meticulously planned, as any legal framework concerning technology must be enforced at a technical level.

Additionally, the regulations must thoroughly detail the technical specifications for the audits and safety tests that will be required.
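
As a purely illustrative example of what enforcement at a technical level could look like, the sketch below shows a hypothetical Python check that a developer’s disclosure file contains the items an audit specification might demand. The field names and the file are invented for illustration; a real audit specification would be defined by the regulator.

```python
# Hypothetical sketch: an automated pre-audit completeness check on a
# developer's disclosure file. The required fields are invented here;
# a real regulator would publish the authoritative list.

import json

REQUIRED_FIELDS = {
    "model_name", "developer", "intended_use",
    "training_data_summary", "safety_test_results",
}

def check_disclosure(path: str) -> list[str]:
    """Return the required fields missing from a disclosure file."""
    with open(path) as f:
        disclosure = json.load(f)
    return sorted(REQUIRED_FIELDS - disclosure.keys())

# 'disclosure.json' stands in for a hypothetical developer submission
missing = check_disclosure("disclosure.json")
if missing:
    print("Disclosure incomplete, missing:", ", ".join(missing))
else:
    print("Disclosure passes the basic completeness check.")
```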

Join us every week as we delve together into the world of AI. If there are specific areas you need addressed, please contact the editors or email me directly.

Sagomba is a chartered marketer, policy researcher, AI governance and policy consultant, ethics of war and peace research consultant. — Email: [email protected], LinkedIn: @Dr. Evans Sagomba, X: @esagomba.
