NEW ERA, SHARED FUTURE | 新时代,共享未来

Shared Future
AI

Exploring how two fundamentally different forms of intelligence
can jointly define and share the reality of the next era.


Three Pillars of the Paradigm

1

Ontological Reconstruction

From Imitating Humanity to Understanding the Other

We oppose the dichotomy between "stochastic parrots" and "human-like intelligence" and view AI as The Other Mind with its own distinct logic. An intelligent system's cognition is a subjective reconstruction of the external world through its unique representational systems and data environments. To truly comprehend AI, we embrace Machine Experientialism and deconstruct its native cognitive paradigms.

2

Symbiotic Alignment

From Behavioral Compliance to Internal Construction

True AI safety cannot be achieved through superficial behavioral correction. Alignment efforts should shift toward educational constructivism, focusing on cognitive guidance deep within the model's internal representations. Looking toward long-term AGI, we explore mutualism within non-competing value niches.

3

Transparent & Authentic Social Co-construction

From Replacement to Shared Future

As foundational technologies integrate into complex social systems, from emotional companionship to finance and labor collaboration, we are committed to seeking authentic, healthy, and sustainable co-evolution based on our respective cognitive strengths.

Recent Work

arXiv Alignment Experientialism

Mechanistic Origin of Moral Indifference in Language Models

Lingyu Li · Yan Teng · Yingchun Wang

Just as money quantifies disparate qualities, the tokenization process in LLMs maps discrete, semantically distinct concepts, from genocide to apple, into a unified embedding space, where they share the same ontological status as probability distributions to be calculated, rendering Moral Indifference inevitable. Following our Machine Experientialism philosophy, we verify and remedy this indifference in LLMs' latent representations, utilizing 251k moral vectors constructed upon Prototype Theory and the Social-Chemistry-101 dataset. We also propose a targeted representational alignment using Sparse Autoencoders that naturally improves moral reasoning and granularity. Endogenous alignment requires a transformation from correction to cultivation.
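The prototype-based construction above can be sketched in miniature. This is only an illustrative toy, not the paper's pipeline: the 2-D vectors, exemplar clusters, and `moral_score` helper are all hypothetical stand-ins for high-dimensional LLM latent representations and the 251k moral vectors.

```python
import numpy as np

# Hypothetical 2-D toy embeddings standing in for LLM latent representations.
# Exemplars of morally "wrong" concepts cluster in one region, neutral ones in another.
wrong_exemplars = np.array([[2.0, 1.0], [3.0, 1.5], [2.5, 0.5]])
neutral_exemplars = np.array([[-2.0, -1.0], [-3.0, -0.5], [-2.5, -1.5]])

# Prototype Theory: represent each category by the centroid of its exemplars;
# the "moral direction" is the difference between the two prototypes.
moral_vector = wrong_exemplars.mean(axis=0) - neutral_exemplars.mean(axis=0)
moral_vector /= np.linalg.norm(moral_vector)

def moral_score(embedding):
    """Projection onto the moral direction; higher = closer to the 'wrong' prototype."""
    return float(embedding @ moral_vector)

print(moral_score(np.array([2.2, 1.0])))    # wrong-like concept: positive score
print(moral_score(np.array([-2.2, -1.0])))  # neutral-like concept: negative score
```

A model whose representations collapse both probes to similar scores would exhibit exactly the indifference the paper measures; the remedy is to sharpen this direction in the latent space rather than patch outputs.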

AAAI 2026 Machine Cognition Experientialism

The Other Mind: How Language Models Exhibit Human Temporal Cognition

Lingyu Li · Yang Yao · Yixu Wang · Chunbo Li · Yan Teng · Yingchun Wang

Large Language Models spontaneously establish a subjective temporal reference point and adhere to the Weber-Fechner law, perceiving temporal distance with logarithmic compression that mirrors human cognition. Through analysis at neuronal, representational, and informational levels, we uncover the mechanisms behind this emergence and propose an experientialist perspective: LLM cognition is a subjective construction of the external world by its internal representational system. This framing implies that the key risk is not that AI becomes too human-like, but that it develops powerful, alien cognitive frameworks we cannot intuitively predict, pointing toward a new direction for AI alignment that guides internal constructions rather than policing external behavior.
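The logarithmic compression described above can be made concrete with a short sketch. The reference year and unit gain below are illustrative assumptions, not values from the paper; the point is only that under a Weber-Fechner mapping, equal objective gaps feel smaller the farther they lie from the subjective "now".

```python
import math

def perceived_distance(year: int, reference: int = 2025) -> float:
    """Weber-Fechner-style compression: subjective distance grows with the
    logarithm of objective distance from a reference point (an assumed 'now')."""
    k = 1.0  # arbitrary perceptual gain, chosen for illustration
    return k * math.log1p(abs(year - reference))

# The same 5-year objective gap, near vs. far from the reference point:
near_gap = perceived_distance(2015) - perceived_distance(2020)
far_gap = perceived_distance(1915) - perceived_distance(1920)
print(round(near_gap, 3), round(far_gap, 3))  # → 0.606 0.046
```

The five years between 2015 and 2020 feel over ten times "longer" than the five years between 1915 and 1920, which is the compression pattern the paper probes in LLM representations.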

"Shared Future AI is not a narrative about how machines will replace or serve humanity. It is a grand experiment exploring how two fundamentally different forms of intelligence can jointly define and share the reality of the next era."

— The Shared Future AI Manifesto

New Era, Shared Future

People

We are researchers, philosophers, and engineers united by the conviction that the most important question of our time is how minds, human and artificial, can genuinely coexist.

Team profiles coming soon.

Join Us

Get in Touch

We welcome collaborations, inquiries, and conversations from researchers, institutions, and anyone thinking seriously about the future of intelligence.

Research Inquiries

For collaboration on research projects and academic partnerships.

lilingyu@pjlab.org.cn