The AI industry has a new buzzword: "PhD-level AI." According to a report from The Information, OpenAI may be planning to launch several specialized AI "agent" products, including a $20,000 monthly tier focused on supporting "PhD-level research." Other reportedly planned agents include a "high-income knowledge worker" assistant at $2,000 monthly and a software developer agent at $10,000 monthly.
OpenAI has not yet confirmed these prices, but the company has mentioned PhD-level AI capabilities before. So what exactly constitutes "PhD-level AI"? The term refers to models that supposedly perform tasks requiring doctoral-level expertise. These include agents conducting advanced research, writing and debugging complex code without human intervention, and analyzing large datasets to generate comprehensive reports. The key claim is that these models can tackle problems that typically require years of specialized academic training.
Companies like OpenAI base their "PhD-level" claims on performance in specific benchmark tests. For example, OpenAI's o1 series models reportedly performed well in science, coding, and math tests, with results similar to human PhD students on challenging tasks. The company's Deep Research tool, which can generate research papers with citations, scored 26.6 percent on "Humanity's Last Exam," a comprehensive evaluation covering over 3,000 questions across more than 100 subjects.
OpenAI's latest advancement along these lines comes from its o3 and o3-mini models, announced in December. These models build upon the o1 family launched earlier last year. Like o1, the o3 models use what OpenAI calls "private chain of thought," a simulated reasoning technique where the model runs through an internal dialog and iteratively works through issues before presenting a final answer.
This approach ostensibly mirrors how human researchers spend time thinking through complex problems rather than providing immediate answers. According to OpenAI, the more inference-time compute you apply, the better the answers get. So here's the key point: For $20,000, a customer would presumably be buying tons of thinking time for the AI model to work on difficult problems.
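OpenAI has not published the details of its "private chain of thought" machinery, but the general idea behind spending more inference-time compute can be illustrated with a toy best-of-N sketch: sample several independent reasoning attempts from a noisy model and keep one that a verifier accepts. Everything here (the noisy model, the verifier, the 30 percent success rate) is a made-up stand-in, not OpenAI's actual method.

```python
import random

def noisy_model(question, rng):
    """Toy stand-in for an LLM reasoning attempt: returns the right
    answer only about 30% of the time for this toy task."""
    correct = sum(question)  # "ground truth" for the toy task
    if rng.random() < 0.3:
        return correct
    return correct + rng.choice([-2, -1, 1, 2])  # plausible wrong answer

def verifier(question, answer):
    """Toy verifier that checks a candidate answer cheaply.
    (Real systems might use a learned reward model instead.)"""
    return answer == sum(question)

def best_of_n(question, n, seed=0):
    """Spend more inference-time compute: sample n reasoning attempts
    and return one that the verifier accepts, if any."""
    rng = random.Random(seed)
    candidates = [noisy_model(question, rng) for _ in range(n)]
    for c in candidates:
        if verifier(question, c):
            return c
    return candidates[0]  # nothing verified; fall back to first sample

if __name__ == "__main__":
    question = (17, 25)  # correct answer: 42
    trials = 200
    # More samples per question -> higher chance one attempt is correct.
    for n in (1, 8):
        wins = sum(best_of_n(question, n, seed=t) == 42
                   for t in range(trials))
        print(f"n={n}: {wins}/{trials} correct")
```

With one attempt per question, the toy model succeeds roughly 30 percent of the time; with eight attempts and a reliable verifier, the failure rate drops to about 0.7^8, or under 6 percent. That is the economic logic, in miniature, of charging more for more thinking time.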
According to OpenAI, o3 earned a record-breaking score on the ARC-AGI visual reasoning benchmark, reaching 87.5 percent in high-compute testing—comparable to human performance at an 85 percent threshold. The model also scored 96.7 percent on the 2024 American Invitational Mathematics Exam, missing just one question, and reached 87.7 percent on GPQA Diamond, which contains graduate-level biology, physics, and chemistry questions.
More than anything else, a PhD is a test of perseverance and determination (usually driven by some form of spite). It is not a measure of knowledge.
No large conglomeration of matrix multiplications has yet done anything like this. I know this because the AI companies would have trumpeted it to the heavens if an LLM had come even close.