The Future of AI Governance, Laws, & Regulations – Why? & How?

A Brief History of AI

The following timeline highlights the contributions of just a few key players, but the field of AI is vast, and many other organizations and researchers around the world have made and are making significant contributions.

  • 1950: Alan Turing proposes the idea of machines that could mimic human intelligence in a paper discussing the possibilities and implications of artificial intelligence. He devises the Turing Test to examine a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.
  • 1956: The Dartmouth Conference is held, where the term “artificial intelligence” is coined and the field of AI research is born.
  • 1960s-70s: During this period, the first AI programs are written. These programs include simple AI games and language processors. However, the lack of computational power and data limits their capabilities.
  • 1980s: An AI “winter” begins as high expectations go unmet and funding is cut. However, AI continues to be used successfully in specific applications, such as expert systems.
  • 1990s: The field begins to revive with a focus on practical applications and the use of statistical methods.
  • 1997: IBM’s Deep Blue beats the reigning world chess champion, Garry Kasparov. This is a significant milestone in the development of AI.
  • 2011: IBM’s Watson wins Jeopardy! against two former champions. This demonstrates the capacity for AI to understand and respond to complex natural language queries.
  • 2012: Google’s X Lab develops Google Brain, which notably trains a neural network that learns, without labeled data, to recognize images of cats in YouTube videos.
  • 2014: Google acquires DeepMind, a UK-based AI company with a focus on creating AI that can learn to solve any complex problem without needing to be taught how.
  • 2015–2016: AlphaGo, developed by Google DeepMind, becomes the first program to beat a human professional Go player (Fan Hui, in 2015) and then defeats Lee Sedol, one of the world’s strongest players, in 2016, achievements many experts had thought to be a decade away given the complexity of the game.
  • 2016: Google implements RankBrain, an AI system, to help process its search results.
  • 2018: OpenAI introduces GPT (Generative Pre-trained Transformer), a large-scale, unsupervised, language model that generates coherent paragraphs of text. Meanwhile, Google introduces BERT (Bidirectional Encoder Representations from Transformers), a method for pre-training language representations that significantly improves the state of the art across a wide range of tasks in natural language processing (NLP).
  • 2019: OpenAI releases GPT-2, an improved version of GPT capable of generating even more coherent and contextually relevant paragraphs of text. In the same year, Google demonstrates real-time voice translation capabilities.
  • 2020: OpenAI introduces GPT-3, the third iteration of the model, with 175 billion parameters, showing significant improvements in language understanding and generation. Google also introduces Meena, a chatbot it claims can converse about almost anything.
  • 2021: OpenAI begins to develop the fourth iteration, GPT-4. Google also launches LaMDA (Language Model for Dialogue Applications) aimed at making conversation with AI systems more natural and engaging.
  • 2022: OpenAI introduces ChatGPT, a derivative of the GPT-3.5 series of models, specifically tuned for generating conversational responses. It is used in various applications, including tutoring, drafting emails, creating written content, translating languages, and simulating characters for video games.
  • 2023: Google launches its chatbot Bard, adding image capabilities, coding features, and app integration over the course of the year. Bard is not presently available in Canada; Google is reportedly taking time to ensure its AI complies with Canadian governance, laws, and regulations.


A Brief History of AI Governance, Laws, and Regulations

The governance, laws, and regulations surrounding artificial intelligence (AI) have grown and developed along with the field itself, aiming to address the unique challenges and questions that AI technologies raise. Here’s a brief overview of this history, focusing on some key developments:

1950s – 1990s: Early Days of AI

During this period, AI was a topic of academic research with little real-world application or impact. There were no specific laws or regulations on AI. Any oversight that did occur was indirect, happening through normal research ethics and funding channels.

2000s: The Internet Era

With the rise of the Internet and digital technologies, AI started to have more practical applications, particularly in areas like data analysis and digital advertising. However, regulation was still mostly nonexistent or indirect. Privacy laws, for example, started to influence how companies could use AI to process user data.

2010s: The Big Data Era

As AI became more sophisticated and began to impact society more significantly, calls for regulation grew. Notable milestones include:

  • 2016: The European Union adopted the General Data Protection Regulation (GDPR), which came into effect in 2018. The GDPR doesn’t focus specifically on AI, but it has significant implications for AI systems, which often rely on personal data. Among other protections, it gives individuals rights around automated decision-making, including the right to meaningful information about the logic involved and, in many cases, the right not to be subject to decisions based solely on automated processing.
  • 2019: The U.S. Congress introduced the Algorithmic Accountability Act, which aimed to require companies to conduct impact assessments of their automated decision systems. This was one of the first attempts in the U.S. to regulate AI specifically.

2020s: AI in Everyday Life

AI began to play an increasingly significant role in everyday life, leading to more concerted efforts at regulation. Here are some notable developments:

  • 2020: The U.S. Department of Defense adopted ethical principles for AI use, setting out guidelines for the responsible use of AI in a military context.
  • 2021: The EU released a proposal for a comprehensive AI Act, which would be the first legal framework on AI. The proposal included risk-based rules for AI systems, with stricter requirements for higher-risk systems.
  • 2022 and beyond: There have been ongoing debates about how to regulate AI in a way that balances innovation with ethical concerns, such as privacy, bias, and transparency. These debates involve multiple stakeholders, including governments, companies, civil society, and academia.

The field of AI governance is emerging and evolving rapidly. As AI technology continues to advance and be integrated into more aspects of society, the need for effective governance and regulation becomes increasingly important.

The Future of AI Governance, Laws, and Regulations

Why AI Requires Expanded Governance, Laws, and Regulations

AI brings many advantages, but it also poses significant challenges and raises a variety of concerns. Here are some key concerns and the ways in which they indicate the need for expanded governance, laws, and regulations around AI:

  1. Privacy: AI often processes extensive datasets, some of which may include personal data. This raises questions about data collection, use, and safeguarding, and calls for stringent regulations ensuring that companies honor privacy, obtain consent for data use, and protect data from breaches.
  2. Bias and Fairness: AI can inadvertently reinforce or intensify biases present in its training data, leading to unjust outcomes in areas like employment, finance, and law enforcement. Legal and regulatory measures can foster transparency and accountability in AI, for example by mandating testing for bias and fairness (a minimal sketch of one such test follows this list).
  3. Transparency and Explainability: Many AI systems, notably deep learning ones, operate as “black boxes,” making decisions without clear reasoning. This opacity complicates accountability. Regulations could mandate certain degrees of explainability in AI systems, particularly in crucial sectors like healthcare and policing.
  4. Accountability: Determining responsibility when AI systems make decisions or take actions can be complex, especially when a system learns and evolves independently. Regulatory frameworks can help delineate liability and accountability when AI systems cause harm.
  5. Impact on Employment: AI and automation might replace human workers, especially in roles involving repetitive tasks. This necessitates governance addressing the societal implications of AI, including upskilling programs, social security nets, and potential regulation on the deployment of AI.
  6. Security: AI systems may be susceptible to threats such as adversarial attacks, which trick AI into making wrong decisions. Additionally, the malevolent use of AI, like deepfakes, is worrisome. Regulations can set minimum security requirements for AI systems and limit harmful AI applications.
  7. Ethical Dilemmas: AI introduces ethical issues, such as preserving human autonomy when interacting with AI and aligning AI systems with human values. Regulatory frameworks can integrate ethical guidelines and principles for AI.
  8. Concentration of Power: The concentration of AI capability and data in a handful of large technology companies is a concern. Regulation could help ensure fair competition, for example by enforcing data-sharing requirements or breaking up monopolies.
  9. Autonomous Weapons: Autonomous AI systems in weaponry and warfare raise grave ethical and safety concerns. International laws and treaties may be necessary to prevent an AI arms race and ensure the responsible use of lethal autonomous weapons.
  10. Impact on Children’s Education: The integration of AI in education can lead to concerns over quality of education, data privacy for minors, and over-reliance on technology. Governance and regulations can ensure quality standards, protect children’s data, and ensure a balanced approach to the use of AI in education.
  11. Proliferation of Fake News and Misinformation: AI can be used to generate or spread fake news and false information, manipulating public opinion and undermining democratic processes. Regulations could require platforms to take steps to identify and remove AI-generated misinformation.
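To make the bias-and-fairness testing mentioned in item 2 concrete, here is a minimal, self-contained sketch of one common audit statistic, the demographic parity gap: the difference in positive-outcome rates between groups. Everything in it is a hypothetical illustration; the function name, the data, and the group labels are assumptions rather than a prescribed regulatory test, and a real audit would use a model’s actual decisions and legally defined protected attributes.

```python
# Hypothetical illustration of a simple fairness check; not a prescribed
# regulatory test. A real audit would use the model's actual decisions
# and legally defined protected attributes.

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-outcome rates between groups.

    predictions: 0/1 model decisions (e.g., loan approvals)
    groups: group label for each decision, aligned with predictions
    """
    counts = {}  # group -> (total decisions, positive decisions)
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + pred)
    rates = {g: positives / total for g, (total, positives) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical approval decisions for two applicant groups.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(f"Demographic parity gap: {demographic_parity_gap(preds, groups):.2f}")
# Group A is approved at a rate of 0.75, group B at 0.25, so the gap is 0.50.
```

A regulation might, for instance, require that such a gap be reported and justified whenever it exceeds an agreed threshold; which metric and threshold are appropriate depends on the domain and the applicable law.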

These concerns emphasize the necessity of a balanced regulatory approach that supports innovation while addressing potential downsides and ensuring the ethical and responsible use of technology.

How Expanded Governance, Laws, and Regulations of AI Could Evolve

Creating comprehensive governance, laws, and regulations for AI is a complex and evolving process that requires the collaboration of various stakeholders, including governments, researchers, industry leaders, and civil society organizations. While it’s not possible to provide a definitive list, here are some key principles and recommendations for effective AI governance:

  1. Ensure transparency and accountability: AI systems should be transparent in their decision-making processes, and developers should be accountable for the consequences of their AI applications. Regulations should encourage thorough documentation and clear explanations of AI algorithms and their decision-making processes (one possible shape for such documentation is sketched after this list).
  2. Protect privacy and data security: AI systems often require vast amounts of data to function effectively. Regulations should ensure that data collection, storage, and use are compliant with privacy laws and maintain the security of individuals’ personal information.
  3. Address bias and discrimination: AI systems can unintentionally perpetuate existing biases and discriminatory practices. Regulations should require developers to minimize bias, assess potential discriminatory impacts, and ensure that AI systems are fair and equitable in their treatment of all users.
  4. Promote safety and reliability: AI systems should be designed and tested rigorously to ensure their safety and reliability. Regulations should establish standards and best practices to minimize the risk of accidents, errors, or other unintended consequences.
  5. Encourage explainability: AI systems should be designed in such a way that their decision-making processes can be understood and explained by humans. Regulations should promote the development of explainable AI, as well as facilitate communication between AI developers and users.
  6. Establish ethical guidelines: AI developers should adhere to ethical principles, such as respect for human autonomy, prevention of harm, and fairness. Regulations should encourage the development of ethical guidelines that can inform AI development and use.
  7. Foster collaboration and open dialogue: AI governance should be a collaborative effort involving various stakeholders. Governments, researchers, industry leaders, and civil society organizations should work together to develop regulations that address the unique challenges posed by AI.
  8. Support research and innovation: Regulations should strike a balance between protecting public interests and promoting AI research and innovation. Governments should invest in AI research, education, and infrastructure to ensure their countries remain competitive in the AI landscape.
  9. Establish oversight and enforcement mechanisms: Governments should create appropriate oversight bodies to monitor AI development and use, as well as enforce regulations and guidelines. These bodies should have the authority to impose penalties for noncompliance and ensure that AI systems adhere to established standards.
  10. Adapt to technological advancements: AI is a rapidly evolving field, and regulations should be flexible and adaptable to keep pace with new developments. Policymakers should continuously review and update regulations to ensure they remain relevant and effective in addressing the challenges posed by AI.
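As one illustration of the documentation envisioned in recommendation 1, the sketch below records basic facts about an AI system in a structured, machine-readable form, loosely inspired by published “model card” practice. The field names and example values are illustrative assumptions, not a mandated schema.

```python
# Hypothetical, machine-readable record of an AI system's key facts,
# loosely modeled on published "model card" practice. Field names and
# values are illustrative, not a mandated schema.
from dataclasses import dataclass, asdict, field
import json

@dataclass
class ModelCard:
    name: str
    version: str
    intended_use: str
    training_data: str                     # high-level data provenance
    known_limitations: list = field(default_factory=list)
    fairness_tests: dict = field(default_factory=dict)  # metric -> result
    contact: str = ""

card = ModelCard(
    name="loan-screening-model",           # hypothetical system
    version="1.2.0",
    intended_use="Pre-screening of consumer loan applications",
    training_data="Anonymized applications, 2018-2022 (hypothetical)",
    known_limitations=["Not validated for applicants under 21"],
    fairness_tests={"demographic_parity_gap": 0.04},
    contact="ai-governance@example.com",
)

print(json.dumps(asdict(card), indent=2))
```

Because the record is structured and machine-readable, an oversight body could collect and compare such documentation across systems without bespoke tooling.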

Together, our conversations can expand solutions and value

We look forward to helping you bring your ideas and solutions to life.