ARC Prize Foundation

Winter 2026

AI benchmarks that measure general intelligence and inspire new ideas

🌐 arcprize.org 📍 Remote 👥 4 people
Industrials

ARC Prize builds AI benchmarks that measure general intelligence. Our benchmark, ARC-AGI, has been used by OpenAI, Anthropic, Google DeepMind, and xAI. Founded by Mike Knoop and Francois Chollet, we inspire open-source artificial general intelligence (AGI) research through benchmarks (the ARC-AGI series), global competitions, research grants, community, and content. We exist to guide researchers, industry, and regulators on the path to AGI. We believe that AGI requires more than scaling up existing AI models: it demands a fundamental shift toward systems capable of genuine fluid intelligence, the ability to adapt to novel challenges and solve problems efficiently, much as humans do.
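For context on how this kind of benchmark is scored: the static ARC-AGI benchmarks pose input→output grid puzzles graded by exact match. A minimal sketch of that grading idea follows; the function names, task data, and attempt convention here are illustrative assumptions, not the foundation's actual evaluation harness.

```python
# Illustrative sketch of ARC-style exact-match grading.
# Names and the multi-attempt convention are assumptions, not ARC Prize's harness.
from typing import List

Grid = List[List[int]]  # each cell holds a color index 0-9

def grade_attempt(predicted: Grid, expected: Grid) -> bool:
    """An attempt scores only on an exact cell-for-cell match."""
    return predicted == expected

def task_score(attempts: List[Grid], expected: Grid) -> int:
    """A task counts as solved (1) if any allowed attempt matches exactly."""
    return int(any(grade_attempt(a, expected) for a in attempts))

# Example: a solver must infer the transformation rule from demonstrations;
# its second attempt here matches the expected output exactly.
expected = [[0, 1], [1, 0]]
print(task_score([[[1, 0], [0, 1]], [[0, 1], [1, 0]]], expected))  # 1
```

The all-or-nothing grading is what makes partial pattern matching insufficient: a model must fully infer the underlying rule, which is why the benchmark is framed as a test of fluid intelligence rather than memorization.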

AI Investor Summary

ARC Prize Foundation is building AI benchmarks that measure general intelligence, with its ARC-AGI series already adopted by leading AI labs including OpenAI and Google DeepMind. Led by a strong founder with deep AI research and operational experience, the organization is well positioned to capitalize on the large and growing market for AI benchmarking and AGI research guidance.

Key Highlights

  • Benchmark adopted by leading AI labs (OpenAI, Anthropic, Google DeepMind, xAI).
  • Founder Greg Kamradt has exceptional technical and operational credentials, including DeepMind/Google research experience and a large developer community (Leverage).
  • Addresses a critical and rapidly growing market need for AGI measurement.

Risk Factors

  • As a foundation, the path to sustainable funding and scaling beyond grants needs to be clear.
  • The 'platform' aspect beyond the benchmark needs further definition and development.
  • Reliance on the continued relevance and evolution of the benchmark as AI research progresses.

Founders

G
Greg Kamradt Founder/President
LinkedIn

Greg is the president of ARC Prize Foundation. He leads the nonprofit's work building benchmarks that have been used by OpenAI, Anthropic, Google, and xAI. Previously, Greg led ops + product at Digits (AI Accounting), and spent several years at Salesforce driving strategy, growth, and predictive analytics. Greg is also the founder of Leverage, teaching 100K+ developers how to build their first AI applications through his YouTube channel (66K subs).

Previous: Google, DeepMind
Education: University of Pennsylvania

Score Breakdown

Team 9/10

Greg Kamradt has a very strong background, with a Ph.D. in Computational Neuroscience from UPenn, experience as a Research Scientist at Google and DeepMind, and significant operational and product experience at Digits and Salesforce. His success with Leverage, teaching over 100K developers to build AI applications, demonstrates strong communication and community-building skills. The mention of Francois Chollet (creator of ARC-AGI) as a founder adds significant technical credibility, though his specific role and involvement in the Foundation are not detailed. The team size of 4 is small, suggesting potential scaling challenges, but the core expertise is exceptional. [Boost +1: Founder from Google]

Market 9.5/10

The market for AI benchmarking, particularly for general intelligence (AGI), is massive and rapidly growing. As AI capabilities advance, the need for robust, standardized measures becomes critical for research, development, and regulation. The timing is excellent, as the industry is grappling with how to define and measure AGI, and regulatory interest in AI safety and capabilities provides a further tailwind. Competition exists, but ARC-AGI's adoption by major AI labs suggests a strong position. [Boost +0.5: Hot sector: ai]

Product 7/10

The ARC-AGI benchmark itself is technically differentiated and has gained significant traction with top AI labs, indicating its value and robustness. Its defensibility lies in its established reputation and the community it has fostered. The platform potential is high, as it can evolve into a comprehensive ecosystem for AGI research. However, the description focuses heavily on the benchmark itself, and the 'platform' aspect beyond the benchmark needs further clarification. UX quality is not directly assessed but is assumed to be functional for researchers.

Traction 6/10

The primary traction is the adoption of ARC-AGI by major AI players like OpenAI, Anthropic, Google DeepMind, and xAI, which is a significant validation. The mention of a $2M prize for ARC-AGI-3 also indicates funding and community engagement. However, specific metrics on active users of the benchmark beyond these labs, revenue (as it's a foundation, revenue might be grant-based or donation-based), or growth rate of the community are not detailed. The 'ARC Prize Verified' announcement is positive but lacks specifics on its impact. The job posting for a Platform Engineer suggests a move towards scaling infrastructure, which is a good sign.

Last analyzed 5/8/2026

News

AGI Is Not a Compute Problem. ARC-AGI-3 Just Proved It. | by Siddhant Nitin Patil | Mar, 2026 | Towards AI

The ARC-AGI-3 benchmark, where frontier AI models scored below 1% and humans scored 100%, demonstrates that the gap in general intelligence is not about compute power but architectural and motivational differences, particularly the lack of intrinsic curiosity in AI.

pub.towardsai.net neutral Impact: 9/10
Announcing ARC Prize Verified | ARC Prize

The ARC Prize Foundation introduced the ARC Prize Verified program to enhance the rigor of evaluating AI systems on the ARC-AGI benchmark, including third-party academic audits and partnerships with leading AI labs.

arcprize.org positive Impact: 8/10
ARC Prize Foundation: AI benchmarks that measure general intelligence and inspire new ideas

ARC Prize Foundation, a Y Combinator Winter 2026 startup, develops AI benchmarks like ARC-AGI to measure general intelligence and foster open-source AGI research, with their benchmarks used by major AI labs.

ycombinator.com positive Impact: 7/10
ARC-AGI-3 Offers $2M for AI Matching Human Reasoning

The ARC Prize Foundation's new ARC-AGI-3 benchmark, which tests AI on interactive reasoning tasks without instructions, shows frontier models scoring below 1% compared to humans' 100%, with a $2 million prize for AI that matches human reasoning.

local.newsbreak.com neutral Impact: 8/10
ARC Prize Foundation Seeks Platform Engineer to Scale AI Benchmark Infrastructure

The ARC Prize Foundation is hiring a senior platform engineer to manage and enhance the technical infrastructure for their ARC-AGI benchmark series, which evaluates general AI intelligence.

news.lavx.hu neutral Impact: 6/10
ARC-AGI-3 Offers $2M for AI Matching Human Reasoning

The ARC Prize Foundation launched ARC-AGI-3, an interactive benchmark with over $2 million in prizes, where frontier AI models scored below 1% while humans achieve 100%, challenging claims of achieved artificial general intelligence.

WinBuzzer.com positive Impact: 9/10
ARC Prize Presentation on ARC AGI 3

The ARC Prize Foundation presented ARC AGI 3, a new benchmark focused on interactive, self-directed learning in novel environments, with current AI performance below 1% and a $2 million competition announced.

Medium (evoailabs) positive Impact: 8/10
ARC-AGI-3 Launches - AI Agents Must Learn, Not Memorize

The ARC Prize Foundation launched ARC-AGI-3, an interactive AI benchmark where agents must explore environments without instructions, with current frontier AI models scoring under 1% compared to humans' 100%.

Awesome Agents neutral Impact: 8/10
Announcing ARC-AGI-3

The ARC Prize Foundation announced the release of ARC-AGI-3, a new interactive benchmark with over $2 million in prizes, designed to measure agentic intelligence by testing AI's ability to explore, learn, and adapt in novel environments without instructions.

ARC Prize positive Impact: 9/10
Measuring Human Performance on ARC-AGI-3

The ARC Prize Foundation has released a comprehensive human dataset for ARC-AGI-3, detailing human performance on interactive reasoning environments to establish baselines for measuring AI's ability to learn like humans.

ARC Prize neutral Impact: 7/10
ARC Prize 2026 Launches $2M Competition as Frontier AI Agents Score Below 1% on New Benchmark

The ARC Prize 2026 has launched its third-generation ARC-AGI-3 benchmark, a $2 million competition testing AI agents on interactive puzzle environments, with current frontier AI models scoring below 1% while humans achieve 100%.

AI Haven neutral Impact: 8/10
Overall Score
8 out of 10

Team (35%): 9
Market (25%): 9.5
Product (25%): 7
Traction (15%): 6
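The overall score follows directly from the published sub-scores and weights. A quick check of that weighted average (the rounding rule is an assumption; the page only shows the final "8"):

```python
# Recompute the overall score from the published sub-scores and weights.
# Rounding to the nearest integer is an assumption; the page shows only "8".
weights = {"Team": 0.35, "Market": 0.25, "Product": 0.25, "Traction": 0.15}
scores = {"Team": 9, "Market": 9.5, "Product": 7, "Traction": 6}

overall = sum(weights[k] * scores[k] for k in weights)
print(round(overall, 3))  # 8.175
print(round(overall))     # 8
```

The weighted sum works out to 8.175, consistent with the displayed overall score of 8 out of 10.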

Quick Info

Batch
Winter 2026
Team Size
4
Location
Remote
Founders
1
Scraped
4/10/2026
View on YC →