Center for Long-term Artificial Intelligence
Home
Research
Meet the Team
About CLAI

Long-term Artificial Intelligence: Science, Technology and Humanity.


Open the Future for Long-term AI and Human-AI Symbiotic Society!

Contact Us


Our Research

Rethinking the Redlines Against AI Existential Risks

The ongoing evolution of advanced AI systems will have profound and enduring impacts on human existence that must not be overlooked. These impacts range from empowering humanity to achieve unprecedented transcendence to potentially posing catastrophic threats to our existence. To proactively and preventively mitigate these potential threats, it is crucial to establish clear red lines that prevent AI-induced existential risks by constraining and regulating advanced AI systems and the actors behind them. This paper explores different concepts of AI existential risk, connects the enactment of AI red lines to broader efforts addressing AI's impacts, constructs a theoretical framework for analyzing the direct impacts of AI existential risk, and on that basis proposes a set of exemplary AI red lines. By contemplating AI existential risks and formulating these red lines, we aim to foster a deeper and more systematic understanding of the potential dangers associated with advanced AI and of the importance of proactive risk management. We hope this work will contribute to the strengthening and refinement of a comprehensive AI red-line system for protecting humanity from AI existential risks.

AI Capability Building for Promoting Global Development and Governance of AI (Call for Actions)

Artificial Intelligence (AI), as a pivotal technology empowering global economic development, social progress, and ecological governance, carries the mission of promoting the well-being of all humanity. The development, application, and governance of AI are not only focal points of concern for the international community but also critical factors influencing the long-term existence and development of humanity as a whole. Here we call upon governments, international organizations, academia, industry, and individuals from all sectors of society to uphold a human-centric approach, adhere to the principle of developing AI for good, and work together to advance global AI capacity building. By promoting and strengthening broad and inclusive international cooperation in the field of AI, exploring collaborative frameworks, bridging development gaps, eliminating digital divides, and maximizing complementary advantages, we aim to harness human wisdom and power to jointly foster the robust development and collaborative governance of global AI, striving towards the goal of using AI to benefit all humanity. See the full Call: https://long-term-ai.center/research/f/ai-capacity-building-for-global-development-and-governance-of-ai

The AGILE Index: the Landscape and Status of Global AI Governance

The rapid advancement of Artificial Intelligence (AI) technology is profoundly transforming human society while presenting a series of ethical, legal, and social issues. As AI technologies, especially generative AI and large language models, become more important in our daily lives, the need for strong governance frameworks has never been more obvious. The AI Governance InternationaL Evaluation Index (AGILE Index) for 2023 is a key tool in this area, offering a detailed look at how 14 countries are handling AI governance. The global community is actively engaging with the unique challenges presented by these advancements, underscoring the growing recognition of the importance of a coordinated international approach to AI governance. Download the report: https://agile-index.ai/publications

Principles on Symbiosis for Natural Life and Living Artificial Intelligence

The development of Artificial Intelligence and Augmented Intelligence will allow life to take a variety of forms. The vision for Human-centered Artificial Intelligence and Augmented Intelligence therefore needs to be refined as soon as possible so that we are better prepared for future changes and evolution. In the society of the future, developments in Artificial Intelligence, Augmented Intelligence, digital technology, and brain and neuroscience will very likely enable the creation of intelligent lives that reach or even exceed the intelligence level of humans created by natural evolution, and traditional human beings will probably no longer sit at the top of the intelligence pyramid. How human beings can form a sustainable symbiotic society with natural life, including themselves, non-human animals, and plants, as well as with various forms of living artificial intelligence, is an important issue for human existence: it will pose fundamental challenges to human survival and offer transformative opportunities for human development.

Ethics in a sustainable symbiotic society will not be limited to how humans should and should not interact with other humans. It will extend at least to how members of the "human-based" core of such a society, together with different forms of living intelligence, relate to each other, and to how they should relate to other forms of life beyond that core. This study presents basic principles for the types of intelligent life in the "human-based" core and for their interactions with other life in a sustainable symbiotic society. Please visit "Principles on Symbiosis for Natural Life and Living AI".

The AI for SDGs Cooperation Network: Weaving the Web of Efforts on Global AI for SDGs

The AI4SDGs Cooperation Network aims to advance the UN Sustainable Development Goals and Digital Cooperation through AI innovation and partner networks with AI-related institutions, universities, and AI industries all over the world. It will collectively promote the realization of the SDGs through AI, advancing the 17 SDG areas through the beneficial use of AI while avoiding negative impacts. The AI4SDGs Cooperation Network will especially support underdeveloped countries and regions and promote the goal of leaving no one behind. The AI4SDGs Think Tank and Cooperation Network is led by the Center for Long-term Artificial Intelligence (CLAI) with support from various academic institutions and industry partners all over the world. For more details, please visit the AI4SDGs Cooperation Network website.
 

Whether We Can and Should Develop Strong Artificial Intelligence: A Survey in China

The purpose of this survey is to objectively present the differing opinions of scholars and practitioners in various fields in China regarding the feasibility and necessity of developing Strong AI. The survey aims to provide a reference for relevant research and further discussion among scholars and the general public from different backgrounds. It was conducted from May to July 2021. At that time, concepts such as "Strong AI" and "Artificial General Intelligence" were not as widely recognized and discussed as they are at the time of the official release of this report; the results therefore better reflect the participants' original understanding. Whether We Can and Should Develop Strong Artificial Intelligence: A Survey in China

Cross-Cultural AI Ethics and Governance: Trust, Cooperation, and Symbiotic Future

Realizing globally sustainable AI development requires international cooperation on AI infrastructures, ethical and governance frameworks, and mechanisms of interaction and coordination, especially taking different cultural perspectives into account. Currently, there are many obstacles to achieving this goal, such as limited interest in appreciating differences in cultures and values, distrust between cultures, and coordination challenges across regions. This forum will discuss these challenges and explore how to increase cross-cultural trust and cooperation on AI ethics and governance between different countries and regions, share experiences, learn from complementary viewpoints, promote common values for human good, and lead us to a symbiotic future. For more information: The 3rd International Workshop on Cross-cultural AI Ethics and Governance

Brain-inspired Conscious Living AI: A Mirror to Humankind

In the long-term quest to understand the scientific nature of Artificial Intelligence, we need to explore the mechanisms of the biological brain and mind and use them as sources of inspiration to create computational models for cognitive intelligent machines and future computational conscious living AI. We need to realize that humans and future brain-inspired conscious living AI will form a Human-AI symbiotic society. Click for the full version: Brain-inspired Conscious Living AI: A Mirror to Humankind

Human-AI Symbiosis

This research focuses on the philosophy, technologies, and policies for an AI-enabled Sustainable Symbiotic Society. The first results focus on Human-AI Symbiosis, especially on how humans can co-exist with future Artificial General Intelligence and Superintelligence. We need proper ways to develop future living AIs as partners for humans: we not only need principles for AI to interact with humans, but also principles for humans to interact with AI. For more information, please visit http://harmonious-ai.org/

Responsible AI to Promote World Peace and Sustainable Development

The United Nations Office for Disarmament Affairs (UNODA) and the European Commission co-hosted a workshop on "Ethics and Emerging Technologies in Weapons Systems" in April 2022. The director of the Center for Long-term AI, Prof. Yi Zeng, was invited as a speaker. The following is a recording of his speech: Responsible AI to Promote World Peace and Sustainable Development (long-term-ai.center)

Principles on Military Artificial Intelligence [for Comment]

Military Artificial Intelligence (AI) has already brought great risks and challenges to the world. We should be vigilant against the lowering of the threshold of war due to the development of military AI and work to reduce avoidable disasters. The "Defense Artificial Intelligence and Arms Control Network" advocates that the design, research, development, use, and deployment of military AI throughout its whole life cycle should comply with the following principles: Principles on Military Artificial Intelligence | Defense AI and Arms Control Network (defense-ai-and-arms-control.network)

Defense AI and Arms Control: Promoting AI for Peace

The Defense AI and Arms Control Network is a network for expert discussions and exchanges of ideas in the fields of Defense AI and Arms Control; Military AI Ethics, Governance, and Global Cooperation; and AI for Peace. It also provides online services that synthesize global discussions on these topics from complementary perspectives, offering a global view to promote AI for peace. For more information, please visit http://www.defense-ai-and-arms-control.network/

Copyright © 2022-2025 Center for Long-term Artificial Intelligence - All Rights Reserved.
