Algor-Ethics: The Implications of AI in AV and Digital Signage Cannot Be Ignored

We have all heard about the 800-pound gorilla in the room and that it should not be ignored. Nobody will claim that AI is being ignored, but a significant aspect of the rapidly evolving technology is not as exposed to scrutiny as it should be. I am speaking about the need for regulation and controls. As the MBAs among us love to say, if you can’t measure it, you can’t manage it. The task of measuring and managing AI makes the 800-pound gorilla look like a newborn chimp… but we must do something!

Let’s be clear: this focus is not a scare tactic aimed at curtailing the impact or development of AI. The genie is out of the bottle and will not be going back in any time soon. Most of us would agree that unfettered growth for any industry or technology, sans regulation and controls of any sort, is (or can be) a bad thing. AI is far from bad… but it can be. I want to give you a few points for further discussion among yourselves that illustrate the point that regulation and control (what it will be is still very TBD!) should be a top-of-mind issue to address the areas outside the bottle where the genie roams.

I want to start with the business of AI. Numbers, numbers everywhere… but who to believe? Bloomberg claims a CAGR of over 45% over the next ten years. The team at MarketsandMarkets forecasts a 37.5% CAGR from 2024 to 2030. Even the most conservative estimates project over 25% year-over-year growth for the next three to five years! This is your 800-pound gorilla!
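To put those CAGR figures in concrete terms, it helps to compound them out. The short sketch below (a back-of-the-envelope illustration, not a calculation from the cited reports) shows the total growth multiple each forecast implies:

```python
def growth_multiple(cagr: float, years: int) -> float:
    """Total growth multiple after compounding `cagr` annually for `years` years."""
    return (1 + cagr) ** years

# Bloomberg's figure: 45% CAGR sustained for ten years
print(f"45% CAGR x 10 yrs:  {growth_multiple(0.45, 10):.1f}x")   # ~41x

# MarketsandMarkets: 37.5% CAGR from 2024 to 2030 (six years)
print(f"37.5% CAGR x 6 yrs: {growth_multiple(0.375, 6):.1f}x")   # ~6.8x

# The "conservative" case: 25% year over year for five years
print(f"25% CAGR x 5 yrs:   {growth_multiple(0.25, 5):.1f}x")    # ~3.1x
```

Even the conservative scenario triples the market in five years; Bloomberg's figure implies roughly a forty-fold expansion in ten.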


One company (along with a handful of others) best illustrates the point of explosive growth. That company is Nvidia. It produces some of the top-of-the-line chips needed to train AI models and for other cutting-edge tech. As one noted financial analyst reported, “Nvidia has been the undisputed winner of the past year’s AI stock boom — recently passed Apple to become the second-most valuable public company. By the numbers: The chipmaker’s market cap passed $3 trillion for the first time, putting the company a hair above Apple and behind only Microsoft. Its stock, which has jumped more than 147% for the year, is the strongest force lifting the S&P 500. The big picture: Nvidia’s profit margins are the envy of the corporate world — it made $14.9 billion of net income on revenue of $26 billion last quarter. By contrast, Nvidia’s net income was just $0.7 billion in the final quarter of 2022.”
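Those quoted figures are worth working through. A quick calculation (a sketch using only the numbers in the quote above) shows why analysts call Nvidia's margins the envy of the corporate world:

```python
# Figures quoted above: Nvidia's last reported quarter vs. the final quarter of 2022
net_income = 14.9    # billions USD, last quarter
revenue = 26.0       # billions USD, last quarter
prior_income = 0.7   # billions USD, final quarter of 2022

net_margin = net_income / revenue * 100   # share of each revenue dollar kept as profit
income_growth = net_income / prior_income

print(f"Net margin: {net_margin:.1f}%")                   # ~57%
print(f"Income growth since Q4 2022: {income_growth:.0f}x")
```

A net margin near 57%, and quarterly profit multiplied roughly twenty-one-fold, is the kind of arithmetic driving that $3 trillion market cap.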

Nvidia is not alone in the trillion-dollar market capitalization club. You also have Microsoft, Apple, Google and Meta. The companies’ individual and collective growth figures are staggering, to say the least. This is an entire troop of 800-pound gorillas!

All the companies’ PR departments are working overtime to characterize activities in the most favorable way possible. The departments all start with the tangible benefits (yes, they are tangible, BTW!). Still, when questioned about so much power in so few companies, they all give lip service (sans solutions) to the need for regulation and controls. Most of these behemoth companies have been before Congress telling our lawmakers just that. This raises the question of whether our lawmakers will actually understand the severity of the situation (both pros and cons) or even attempt to gain some level of consensus and act on regulation and controls. The skeptic in me is doubtful in this day and age of political polarization.

The good news (or a more optimistic outlook) came from the recent G7 summit in Italy. For those unfamiliar with the G7, it is an informal forum of heads of state and government. The G7 comprises Canada, France, Germany, Italy, Japan, the UK and the US. In addition, the European Union is represented at all meetings. The summit meeting is the highlight of a G7 year. At these summits, the G7 heads of state and government discuss key global policy issues, exchange views and work together to develop constructive solutions. Over the years, a closely woven process of political consultation between the governments of the member countries has developed around the summit meetings.

Summary of the G7’s 11 Guiding Principles on AI:
1. Take appropriate measures throughout the development of advanced AI systems, including prior to and throughout their deployment and placement on the market, to identify, evaluate and mitigate risks across the AI lifecycle.
2. Identify, report and mitigate patterns of misuse after deployment, including placement on the market.
3. Publicly report advanced AI systems’ capabilities, limitations and domains of appropriate and inappropriate use in order to help ensure sufficient and ongoing transparency, and thereby contribute to increased accountability.
4. Work towards responsible information-sharing and reporting of incidents among organizations developing advanced AI systems, including industry, governments, civil society and academia.
5. Develop, implement and disclose AI governance and risk management policies, grounded in a risk-based approach – including privacy policies and mitigation measures, in particular for organizations developing advanced AI systems.
6. Invest in and implement robust security controls, including physical security, cybersecurity and insider-threat safeguards across the AI lifecycle.
7. Develop and deploy reliable content authentication and provenance mechanisms, where technically feasible, including watermarking or other techniques to enable users to identify AI-generated content.
8. Prioritize research to mitigate societal, safety and security risks and prioritize investment in effective mitigation measures.
9. Prioritize the development of advanced AI systems to address the world’s greatest challenges, such as the climate crisis, global health and education.
10. Advance the development and adoption of international technical standards.
11. Implement appropriate data input measures and protections for personal data and intellectual property.


To emphasize the importance of AI and how it affects humanity, Pope Francis was invited by G7 host country Italy to address a special session at the annual summit. He became the first pontiff to address a G7 summit, in this case to underscore the perils and promises of AI. His remarks were not about religion but focused on humanity, ethics and what we, as a human race, should consider.

The AP reported that Pope Francis challenged leaders of the world’s wealthy democracies to keep human dignity foremost in developing and using artificial intelligence, warning that “such powerful technology risks turning human relations themselves into mere algorithms.” He began by saying that the birth of AI represents “a true cognitive-industrial revolution” which will lead to “complex epochal transformations”.

These transformations, the Pope said, have the potential to be both positive (for example, the “democratization of access to knowledge” and the “exponential advancement of scientific research” and “a reduction in demanding and arduous work”) – and negative (for instance, “greater injustice between advanced and developing nations or between dominant and oppressed social classes.”)

Francis said politicians must take the lead in ensuring AI remains human-centric so that humans and not machines always make decisions about when to use lethal and less-lethal tools.

“We would condemn humanity to a future without hope if we took away people’s ability to make decisions about themselves and their lives by dooming them to depend on the choices of machines,” he said. “We need to ensure and safeguard a space for proper human control over the choices made by artificial intelligence programs: human dignity itself depends on it.”

By attending the G7 summit, Francis joined a chorus of countries and global bodies pushing for stronger guardrails on AI following the boom in generative AI. The G7 final statement largely reflected his concerns, both positive and negative. The leaders vowed to better coordinate the governance and regulatory frameworks surrounding AI to keep it “human-centered.” At the same time, they acknowledged the potential impacts on the labor markets of machines taking the place of human workers and on the justice system of algorithms predicting recidivism.

“We will pursue an inclusive, human-centered, digital transformation that underpins economic growth and sustainable development, maximizes benefits, and manages risks in line with our shared democratic values and respect for human rights,” they said.

There’s one final set of remarks from the Pope on the “politics of AI” that I want to share. A particular concern of his is that today, it is “increasingly difficult to find agreement on the major issues concerning social life.” He points out that there is less and less consensus regarding the philosophy that should be shaping artificial intelligence. What is necessary, therefore, the Pope said, is the development of what he termed “algor-ethics,” a series of “global and pluralistic principles which are capable of finding support from cultures, religions, international organizations and major corporations.”

“If we struggle to define a single set of global values,” the Pope said, “we can at least find shared principles with which to address and resolve dilemmas or conflicts regarding how to live.”

Faced with this challenge, the Pope said, “Political action is urgently needed. Only a healthy politics, involving the most diverse sectors and skills, is capable of dealing with the challenges and promises of artificial intelligence.”

The goal, Pope Francis concluded, is not “stifling human creativity and its ideals of progress but rather directing that energy along new channels.”

If this all sounds like a daunting set of tasks, it truly is. In many ways (not unlike the global climate discussions), this one is existential. Addressing these challenges will affect our lives more than anything we have been exposed to since the Industrial Revolution, electricity and the internet. In the United States, President Joe Biden appears to take it seriously. He issued an executive order on AI safeguards and called for legislation to strengthen it. Some states like California and Colorado have been trying to pass their own AI bills, with mixed results. If I sound skeptical, I am. When was the last time our politicians got together on something we might call a leading-edge approach to anything? The point I want to leave you with is this: AI has significant benefits in too many areas to mention, but the use of AI must be controlled and regulated if it is to be all that it can be and yet not harm those it is intended to serve. Let the discussions begin!