LATEST RELEASE

Introducing Vibes, immersive videos created with AI


LATEST RELEASE

Segment Anything 3

With SAM 3, you can use text and visual prompts to precisely detect, segment, and track any object in an image or video.
META AI

See what you can do with Meta AI

Explore features

Meta AI app

Create vibes: expressive AI-generated videos.
Add yourself and friends, and bring your ideas to life.
Learn more
AI Studio
With Meta AI Studio, anyone can create, discover, and interact with AIs to explore their interests, learn new skills, and have fun.

Talk to your personal AI

Learn more
BUILD WITH LLAMA 4

Explore our latest large language model

Download models
HOW WE INNOVATE

We innovate in the open for everyone

Research

Self-supervised learning for vision at unprecedented scale

Explore DINOv3
RESEARCH PROJECTS
RESEARCH AREAS

We advance AI capabilities in expressive communication, social interaction and use of language. Through foundational research in natural language processing and multimodal AI, we develop systems that enable more natural, meaningful interactions between humans and machines.

We advance the fundamental capabilities needed for AI to understand and act within the physical and digital world. Through our research, we hope to unlock a wide variety of future agents that help humans do more throughout all aspects of their lives, ranging from robots that can move around and interact with objects to accomplish household tasks, to wearable glasses that understand the real and digital world and support people throughout their day.

Our research focuses on aligning models and decisions with human intent and societal interests through deeper fundamental understanding and enhanced steerability and efficiency of AI models. This pillar is at the forefront of research on AI for science and AI for society.

We conduct fundamental research in pre-training methods and new architectural paradigms that enable foundation models to learn and reason with agility and efficiency across novel downstream challenges. Our work expands the frontier of approaches such as world models, non-autoregressive architectures, and memory-augmented models to unlock new capabilities in adaptive intelligence.

We develop code world models as foundational models for code and agents, and advance methods for reinforcement learning with execution feedback. We research much more efficient architectures for code world models, latent-space reasoning, and grounded reasoning and planning with world models. We also develop various agents, e.g. AI research agents that assist our own research, and upstream our agents' needs to our foundational models.

The north star goal of our Perception research teams is to enable general AI systems to perceive the visual world to inform action, communication, and generation. To achieve this goal, we're developing next-generation perception models that understand images and videos not as raw pixels, but as captures of visual entities such as people, objects, and activities, along with their spatial and temporal relationships.

OUR VISION

Personal superintelligence for everyone

About

Personal Superintelligence

"...Meta's vision is to bring personal superintelligence to everyone. We believe in putting this power in people's hands to direct it towards what they value in their own lives.

This is distinct from others in the industry who believe superintelligence should be directed centrally towards automating all valuable work, and then humanity will live on a dole of its output. At Meta, we believe that people pursuing their individual aspirations is how we have always made progress expanding prosperity, science, health, and culture. This will be increasingly important in the future as well."
Read full statement

We’re advancing AI for a more connected world.

Pushing the boundaries of AI through research, infrastructure and product innovation.
Learn more