Anthropic co-founder Jack Clark’s 2028 warning is about a specific threshold in AI development: the point where the work of creating frontier AI systems becomes automated. Clark wrote in Import AI that there is a “likely chance (60%+)” that “no-human-involved AI R&D” could happen by the end of 2028, defining that as an AI system powerful enough to plausibly build its own successor [7].
That matters because the concern is not simply that AI tools will write more code. It is that the research loop behind increasingly capable AI systems could itself become automated, reducing the role of human researchers in creating the next generation of models [4][7].
What Jack Clark predicted
Clark’s forecast has two important parts.
First, he estimates a 60%+ chance that AI R&D could become “no-human-involved” by the end of 2028 [7]. Second, the threshold he is watching is not ordinary AI assistance, but a system capable enough to “plausibly autonomously build its own successor” [7].
Coverage of Clark’s warning has described this as a move toward end-to-end automation of frontier-model research and development. Another report summarized the prediction as a 60%+ chance that an AI model could fully train its successor by the end of 2028.