EGUsphere - An evolving Coupled Model Intercomparison Project phase 7 (CMIP7) and Fast Track in support of future climate assessment
© Author(s) 2024. This work is distributed under
the Creative Commons Attribution 4.0 License.
20 Dec 2024
An evolving Coupled Model Intercomparison Project phase 7 (CMIP7) and Fast Track in support of future climate assessment
John Patrick Dunne, Helene T. Hewitt, Julie Arblaster, Frédéric Bonou, Olivier Boucher, Tereza Cavazos, Paul J. Durack, Birgit Hassler, Martin Juckes, Tomoki Miyakawa, Matthew Mizielinski, Vaishali Naik, Zebedee Nicholls, Eleanor O’Rourke, Robert Pincus, Benjamin M. Sanderson, Isla R. Simpson, and Karl E. Taylor
Abstract.
The vision for the Coupled Model Intercomparison Project (CMIP) is to coordinate community-based efforts to answer key and timely climate science questions and facilitate delivery of relevant multi-model simulations through shared infrastructure for the benefit of physical understanding, vulnerability, impacts and adaptation analysis, national and international climate assessments, and society at large. From its origins as a punctuated phasing of climate model intercomparison and evaluation, CMIP is now evolving through coordinated and federated planning into a more continuous climate modelling programme. The activity is supported by the design of experimental protocols, an infrastructure that supports data publication and access, and the phased delivery or “fast track” of climate information for national and international climate assessments informing decision making. Key to these CMIP7 efforts are: an expansion of the Diagnostic, Evaluation and Characterization of Klima (DECK) to include historical, effective radiative forcing, and a focus on CO₂-emissions-driven experiments; sustained support for community MIPs; periodic updating of historical forcings and diagnostics requests; and a collection of experiments drawn from community MIPs to support research towards the 7th Intergovernmental Panel on Climate Change Assessment Reporting cycle, or “AR7 Fast Track”, and climate services goals across prediction and projection, characterization, attribution, and process understanding.
How to cite.
Dunne, J. P., Hewitt, H. T., Arblaster, J., Bonou, F., Boucher, O., Cavazos, T., Durack, P. J., Hassler, B., Juckes, M., Miyakawa, T., Mizielinski, M., Naik, V., Nicholls, Z., O’Rourke, E., Pincus, R., Sanderson, B. M., Simpson, I. R., and Taylor, K. E.: An evolving Coupled Model Intercomparison Project phase 7 (CMIP7) and Fast Track in support of future climate assessment, EGUsphere [preprint], https://doi.org/10.5194/egusphere-2024-3874, 2024.
Received: 09 Dec 2024
Discussion started: 20 Dec 2024
Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims made in the text, published maps, institutional affiliations, or any other geographical representation in this paper. While Copernicus Publications makes every effort to include appropriate place names, the final responsibility lies with the authors. Views expressed in the text are those of the authors and do not necessarily reflect the views of the publisher.
Download & links
Preprint (PDF, 1328 KB)
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Journal article(s) based on this preprint
01 Oct 2025 | Highlight paper
An evolving Coupled Model Intercomparison Project phase 7 (CMIP7) and Fast Track in support of future climate assessment
John P. Dunne, Helene T. Hewitt, Julie M. Arblaster, Frédéric Bonou, Olivier Boucher, Tereza Cavazos, Beth Dingley, Paul J. Durack, Birgit Hassler, Martin Juckes, Tomoki Miyakawa, Matt Mizielinski, Vaishali Naik, Zebedee Nicholls, Eleanor O'Rourke, Robert Pincus, Benjamin M. Sanderson, Isla R. Simpson, and Karl E. Taylor
Geosci. Model Dev., 18, 6671–6700, 2025
Short summary
The seventh phase of the Coupled Model Intercomparison Project (CMIP7) coordinates efforts to answer key and timely climate science questions and facilitate delivery of relevant multi-model simulations for prediction and projection; characterization, attribution, and process understanding; and vulnerability, impact, and adaptation analysis. Key to the CMIP7 design are the mandatory Diagnostic, Evaluation and Characterization of Klima and optional Assessment Fast Track experiments.
Editorial statement
The Coupled Model Intercomparison Project lies at the core of global climate prediction. This paper details the Coupled Model Intercomparison Project phase 7 (CMIP7) and its Fast Track initiative. By transitioning into a continuous climate modeling program with enhanced coordination and federated planning, CMIP7 aims to address key climate questions more effectively. The expansion of the Diagnostic, Evaluation, and Characterization of Klima (DECK) experiments—including the addition of historical simulations, effective radiative forcing assessments, and CO₂-emissions-driven experiments—strengthens the foundation for climate model evaluation and projection. Additionally, the AR7 Fast Track ensures timely delivery of critical climate simulation data to support the upcoming 7th Intergovernmental Panel on Climate Change Assessment Report. This paper highlights how these advancements in experimental protocols and infrastructure support not only scientific understanding but also inform policy-making and climate services, ultimately contributing to global efforts in climate adaptation and mitigation.
Interactive discussion
Status: closed
Comment types: AC – author | RC – referee | CC – community | EC – editor | CEC – chief editor
RC1: 'Comment on egusphere-2024-3874', Anonymous Referee #1, 22 Jan 2025
CMIP has been a cornerstone of international Earth system modelling for the past 3 decades, delivering key science support to IPCC Assessments, while advancing the development of Earth system and climate models and their use to understand past and future evolution of the climate system. CMIP has been grappling with the dual demands of delivering science support (mainly future projections) to international climate change assessments and the growing climate service sector, while also coordinating research-led experiments to advance Earth system models and scientific understanding. This dual set of demands has caused CMIP to grow significantly over its past two iterations (CMIP5 and CMIP6) in terms of MIPs, experiments to be run, and data to be archived, with consequences for contributing modelling groups. All of this has been (and is being) done through short-term funding, uncoordinated at an international level, supporting the development of forcing data, realization of experiments, and maintenance of the underpinning infrastructure. Such a situation is difficult to maintain; something had to change in the organization of CMIP going forwards. CMIP7, as described in this paper, is a step towards such a change, with a first attempt to separate simulations intended to support international assessments (e.g. IPCC AR7 and the CMIP7 Fast Track) from other experiments intended to advance the science and modelling of the climate system (e.g. CMIP7 community MIPs). This paper is therefore timely and important. From this perspective the paper clearly needs to be published, though not in its present form. Below I outline a number of points that need addressing before the paper is suitable for publication. I hope this will increase its value for the research community and for CMIP more generally.
Major points.
The paper is very wordy, with lots of long sentences and lists justifying why things have been, or will be, done in a certain way. This makes the paper tedious to read. Addressing this could reduce the length of the paper (easily) by 25% and make it a more enjoyable read! As an example, lines 73 to 122 could be reduced to ~10 lines and still deliver the key messages. Section 3.5 adds very little. While the CMIP IPO is a very good development and is doing a great job supporting the development of CMIP7, I am not convinced much of section 4.1 is really needed in a paper. Section 4.2 is also very wordy and rambling. This is true for a lot of the introduction, which could be reduced in length without losing much.
There are quite a few examples of repetition, e.g. lines 84-86, lines 93-94, lines 135 to 138, 144-145, 440-445. This needs to be reduced throughout the manuscript.
There are also numerous examples of sentences beginning with long justifications for what is to come based on what has already been said: e.g. line 93: “In addition to the systematic characterization of climate mechanisms….” or line 110: “Beyond direct contribution to national and international climate assessments…”, and lots of similar examples. I don’t think these are needed and can be deleted in lots of places.
The paper has lots of examples explaining how CMIP has been (and will be) supported by, aligned with, and deliver to, WCRP. While CMIP is a WCRP-sponsored activity and this is important, it is likely sufficient to say this once (most people know this already) and not have numerous motivations and links to WCRP listed. I suggest reducing these (examples include lines 60-63, 110-120, and others).
The 4 research questions are all interesting and important. What the paper lacks is a clear link between these research questions and the experiments proposed (either as part of the Fast Track or within the community MIPs). Will there be new experiments designed to specifically address some of the research questions? How will the existing experiments advance understanding? In some cases this is clear (e.g. CO2-emission-driven models will likely expose (and lead to improvement in) carbon-cycle biases and feedbacks more thoroughly than concentration-driven models) but in many instances it isn’t. The connection between the guiding research themes and the experiments planned in CMIP7 needs to be better explained.
In two places (line 180 and line 645) there is an assertion that high-ECS models in CMIP6 have been proven to be incorrect, and by implication that these models are worse than lower-ECS models, or just wrong. I don’t agree with this assertion. A high-ECS Earth (>5K) is very unlikely, but it has not been conclusively ruled out. If anything, recent increased warming, and suggestions of a possible role for changing cloud-radiation processes in this increased warming, may increase the likelihood of a high-ECS world. In addition, some of the CMIP6 models with high (increased relative to CMIP5) ECS have been shown to realize this because of improvements in specific cloud feedback processes that were previously (erroneously) balancing other incorrect feedback magnitudes, leading to a lower ECS through compensating errors. With removal (improvement) of one aspect of this compensation, ECS has increased. While the higher ECS may not be correct, the underlying processes/feedbacks are likely simulated better. To me this is a model improvement. It would be a pity if CMIP7 discouraged groups from making such important model improvements, even if that risked increasing their model ECS value. I suggest modifying these two assertions.
The general aspiration for CMIP7 to separate out policy-relevant simulations (e.g. Fast Track for IPCC AR7) and longer-term MIPs aimed at specific research questions, is a good one. The paper could do a better job explaining and motivating this separation, including how modelling groups could best contribute to either or both parts of CMIP7.
Table 3 is very long and poorly explained. Could it be presented in a more engaging manner? If the main explanations for the different experiments are in the references listed in the table, please let the reader know that. Also, I think there may be some errors in the table. e.g. (i) are piClim-histaer and piClim-histall 30y AMIP or 172y AMIP runs? (ii) For piClim-X and SSPXSST-SLCF I don’t see how feedbacks can be assessed (as suggested in the table) if the models are run in prescribed-SST mode. At least the classical definition of a feedback modifying the SST-response to a given forcing and thus also modifying the forcing itself, cannot be realised in prescribed-SST mode. (iii) for piClimSLCF it is unclear what happens to the non-SLCF emissions. Are these held at PI values? A bit better explanation of this table would help the reader.
Minor Points
On the “guiding research questions” I don’t understand why these are “ephemeral” (line 155).
Regarding the Fast Track experiments, it is not clear if groups are recommended to do everything in either emission-mode or concentration-mode. For example, are there plans for DAMIP to support both emission-driven and concentration-driven experiments? This is not made clear in the explanation of table 3.
Line 494-495: How will DAMIP support analysis of individual forcings in the context of an interactive carbon cycle? Will DAMIP run a coordinated set of experiments for emission-driven ESMs?
Line 128 talks about the lack of infrastructure for a sustained approach. This is also true with respect to funding of modelling groups to realize such regular simulations. This should also be highlighted.
The text in lines 221-223 on high-resolution models contradicts itself. Please make clearer what you mean here.
In section 2.3 I am surprised that emission-driven ESM (scenarioMIP) projections are not discussed more. This seems an important development on CMIP6.
Lines 266 to 267: “while modelling groups suggest that increase in fire over this century (Allen et al. 2024)” seems to be an incomplete sentence.
For section 2.4 more discussion on potential MIP contributions to addressing this seems appropriate (e.g. TIPMIP, CDRMIP, C4MIP). I am also surprised there isn’t more mention of global warming overshoot scenarios in this section.
Line 350: the importance of coupled carbon-climate ESMs in climate stabilization is mentioned. Their importance in negative emission scenarios (warming overshoot) is likely even greater and worth mentioning.
Lines 386-388: Will there be a coordinated effort to compare CMIP6 historical and scenario forcings to those in CMIP7? This would be a good thing to do (e.g. a forcing comparison MIP).
Section 3.2. Will there be any stability/conservation requirements to meet for the piControl or esm_piControl runs?
Lines 421 to 425: I don’t understand what is being proposed here. Please make it clearer.
If model X is used in a given science MIP, is it still an entry card requirement that model X also runs the CMIP7 DECK? This is not clear.
Line 628: The REF is mentioned and somewhere else this is defined as a Rapid Evaluation Framework. What the REF is, and what it is intended for, needs to be more clearly explained.
Citation: https://doi.org/10.5194/egusphere-2024-3874-RC1
AC2: 'Reply on RC1', John Dunne, 12 Apr 2025
The authors deeply appreciate the reviewer's careful attention to the previous version of the manuscript and have incorporated their suggested changes throughout.
Original reviewer comments are in italics. Author responses are provided inline in bold.
RC1: Anonymous:
CMIP has been a cornerstone of international Earth system modelling for the past 3 decades, delivering key science support to IPCC Assessments, while advancing the development of Earth system and climate models and their use to understand past and future evolution of the climate system. CMIP has been grappling with the dual demands of delivering science support (mainly future projections) to international climate change assessments and the growing climate service sector, while also coordinating research-led experiments to advance Earth system models and scientific understanding. This dual set of demands has caused CMIP to grow significantly over its past two iterations (CMIP5 and CMIP6) in terms of MIPs, experiments to be run, and data to be archived, with consequences for contributing modelling groups. All of this has been (and is being) done through short-term funding, uncoordinated at an international level, supporting the development of forcing data, realization of experiments, and maintenance of the underpinning infrastructure. Such a situation is difficult to maintain; something had to change in the organization of CMIP going forwards. CMIP7, as described in this paper, is a step towards such a change, with a first attempt to separate simulations intended to support international assessments (e.g. IPCC AR7 and the CMIP7 Fast Track) from other experiments intended to advance the science and modelling of the climate system (e.g. CMIP7 community MIPs). This paper is therefore timely and important. From this perspective the paper clearly needs to be published, though not in its present form. Below I outline a number of points that need addressing before the paper is suitable for publication. I hope this will increase its value for the research community and for CMIP more generally.
Major points.
The paper is very wordy, with lots of long sentences and lists justifying why things have been, or will be, done in a certain way. This makes the paper tedious to read. Addressing this could reduce the length of the paper (easily) by 25% and make it a more enjoyable read! As an example, lines 73 to 122 could be reduced to ~10 lines and still deliver the key messages.
Thanks for your comment. We have substantially edited the manuscript with this in mind.
Section 3.5 adds very little.
Agreed. We have deleted this section and refer to Appendix 3.
While the CMIP IPO is a very good development and is doing a great job supporting the development of CMIP7, I am not convinced much of section 4.1 is really needed in a paper.
We agree that the section was originally too long; we have shortened it in the revision and added explicit mention of the task teams, but we disagree with its outright removal. The authors feel that the role of the IPO and associated task teams is critical to acknowledge publicly as part of the formal scientific record, in response to WMO Resolution 67 and community surveys requesting increased support for community engagement.
Section 4.2 is also very wordy and rambling. This is true for a lot of the introduction, which could be reduced in length without losing much.
Agreed. The text has been reduced and tightened throughout the paper.
There are quite a few examples of repetition, e.g. lines 84-86, lines 93-94, lines 135 to 138, 144-145, 440-445. This needs to be reduced throughout the manuscript.
Agreed. We have reduced text in the suggested lines.
There are also numerous examples of sentences beginning with long justifications for what is to come based on what has already been said: e.g. line 93: “In addition to the systematic characterization of climate mechanisms….” or line 110: “Beyond direct contribution to national and international climate assessments…”, and lots of similar examples. I don’t think these are needed and can be deleted in lots of places.
Agreed.  We have removed these sentences and reduced and revised the text substantially.
The paper has lots of examples explaining how CMIP has been (and will be) supported by, aligned with, and deliver to, WCRP. While CMIP is a WCRP-sponsored activity and this is important, it is likely sufficient to say this once (most people know this already) and not have numerous motivations and links to WCRP listed. I suggest reducing these (examples include lines 60-63, 110-120, and others).
We have reduced these throughout the paper.
The 4 research questions are all interesting and important. What the paper lacks is a clear link between these research questions and the experiments proposed (either as part of the Fast Track or within the community MIPs). Will there be new experiments designed to specifically address some of the research questions? How will the existing experiments advance understanding? In some cases this is clear (e.g. CO2-emission-driven models will likely expose (and lead to improvement in) carbon-cycle biases and feedbacks more thoroughly than concentration-driven models) but in many instances it isn’t. The connection between the guiding research themes and the experiments planned in CMIP7 needs to be better explained.
We have added some context for the science questions, including explaining their provenance and connection to, e.g., the WCRP 2019-2028 Strategic Plan Science Objectives. We explain in particular that the questions are an assessment of timely opportunities rather than a constraint on the research agenda. We also explicitly refer readers to section 3.4.5, where the connections between questions and experiments are made.
In two places (line 180 and line 645) there is an assertion that high-ECS models in CMIP6 have been proven to be incorrect, and by implication that these models are worse than lower-ECS models, or just wrong. I don’t agree with this assertion. A high-ECS Earth (>5K) is very unlikely, but it has not been conclusively ruled out. If anything, recent increased warming, and suggestions of a possible role for changing cloud-radiation processes in this increased warming, may increase the likelihood of a high-ECS world. In addition, some of the CMIP6 models with high (increased relative to CMIP5) ECS have been shown to realize this because of improvements in specific cloud feedback processes that were previously (erroneously) balancing other incorrect feedback magnitudes, leading to a lower ECS through compensating errors. With removal (improvement) of one aspect of this compensation, ECS has increased. While the higher ECS may not be correct, the underlying processes/feedbacks are likely simulated better. To me this is a model improvement. It would be a pity if CMIP7 discouraged groups from making such important model improvements, even if that risked increasing their model ECS value. I suggest modifying these two assertions.
The discussion around high ECS has been removed. We also highlight that the CMIP7 effort to provide the Rapid Evaluation Framework allows for a better assessment of different aspects of model performance and simulation for different potential end users and applications.
The general aspiration for CMIP7 to separate out policy-relevant simulations (e.g. Fast Track for IPCC AR7) and longer-term MIPs aimed at specific research questions is a good one. The paper could do a better job explaining and motivating this separation, including how modelling groups could best contribute to either or both parts of CMIP7.
We appreciate the support of the reviewer for this concept and have tried to make the experimental design and connection to the research questions more clear.
Table 3 is very long and poorly explained. Could it be presented in a more engaging manner? If the main explanations for the different experiments are in the references listed in the table, please let the reader know that. Also, I think there may be some errors in the table. e.g. (i) are piClim-histaer and piClim-histall 30y AMIP or 172y AMIP runs? (ii) For piClim-X and SSPXSST-SLCF I don’t see how feedbacks can be assessed (as suggested in the table) if the models are run in prescribed-SST mode. At least the classical definition of a feedback modifying the SST-response to a given forcing, and thus also modifying the forcing itself, cannot be realised in prescribed-SST mode. (iii) for piClimSLCF it is unclear what happens to the non-SLCF emissions. Are these held at PI values? A bit better explanation of this table would help the reader.
We have taken your comments on board and tried to streamline and clarify the information in the table and corrected errors and inconsistencies.
Minor Points
On the “guiding research questions” I don’t understand why these are “ephemeral” (line 155).
We have made explicit why the questions are ephemeral: “These questions are more focused on the capabilities of current ESMs - and hence more ephemeral and timely - than those posed for CMIP6...”
Regarding the Fast Track experiments, it is not clear if groups are recommended to do everything in either emission-mode or concentration-mode. For example, are there plans for DAMIP to support both emission-driven and concentration-driven experiments? This is not made clear in the explanation of table 3.
The following clarification has been added to Table 3:
The esm- prefix indicates experiments are forced by CO2 emissions rather than CO2 concentrations.
Line 494-495: How will DAMIP support analysis of individual forcings in the context of an interactive carbon cycle? Will DAMIP run a coordinated set of experiments for emission-driven ESMs?
Only the Fast Track DAMIP experiments (concentration-driven) are discussed here. We now refer to the DAMIP v2.0 paper (Gillett et al., 2025), where additional DAMIP experiments are discussed.
Line 128 talks about the lack of infrastructure for a sustained approach. This is also true with respect to funding of modelling groups to realize such regular simulations. This should also be highlighted.
Agreed; this is now highlighted.
The text in lines 221-223 on high-resolution models contradicts itself. Please make clearer what you mean here.
This section has been substantially revised.
In section 2.3 I am surprised that emission-driven ESM (ScenarioMIP) projections are not discussed more. This seems an important development on CMIP6.
We have added a sentence highlighting the new focus on CO2-emissions forced scenarios.
Lines 266 to 267: while modelling groups suggest that increase in fire over this century (Allen et al. 2024) seems to be an incomplete sentence.
We have revised the text accordingly.
For section 2.4 more discussion on potential MIP contributions to addressing this seems appropriate (e.g. TIPMIP, CDRMIP, C4MIP). I am also surprised there isn’t more mention of global warming overshoot scenarios in this section.
Agreed. Details of various MIP contributions to the research questions, including overshoot scenarios, are included in section 3.4.5.
Line 350: the importance of coupled carbon-climate ESMs in climate stabilization is mentioned. Their importance in negative emission scenarios (warming overshoot) is likely even greater and worth mentioning.
Revised accordingly.
Lines 386-388: Will there be a coordinated effort to compare CMIP6 historical and scenario forcings to those in CMIP7? This would be a good thing to do (e.g. a forcing comparison MIP).
Comparison of CMIP6 historical forcings datasets with those of CMIP7 is underway as a Fresh Eyes on CMIP project. Comparison of model simulations driven by CMIP6-era versus CMIP7-era forcings is proposed in DAMIP v2.0 (Gillett et al., 2025).
Section 3.2. Will there be any stability/conservation requirements to meet for the piControl or esm_piControl runs?
We have revised this section to include the C4MIP criterion of 10 PgC/century per component and the Irving et al. (2021) result comparing ocean heat content drift in CMIP6 piControls.
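As an aside, a drift criterion of this kind reduces to a simple check on the linear trend of a control-run time series. The sketch below is a hypothetical illustration only (the function name and the synthetic data are invented here; the 10 PgC/century threshold is the C4MIP criterion cited above):

```python
import numpy as np

def drift_per_century(annual_values):
    """Least-squares linear drift of an annual time series,
    expressed per century (100 x the per-year slope)."""
    years = np.arange(len(annual_values))
    slope, _ = np.polyfit(years, annual_values, 1)  # units per year
    return slope * 100.0  # units per century

# Synthetic 500-year piControl land-carbon pool (PgC) drifting ~0.04 PgC/yr
rng = np.random.default_rng(0)
pool = 2000.0 + 0.04 * np.arange(500) + rng.normal(0.0, 0.5, 500)

drift = drift_per_century(pool)  # roughly 4 PgC/century
meets_c4mip = abs(drift) < 10.0  # C4MIP criterion: < 10 PgC/century
```

The same trend-based check applies to atmospheric CO₂ in ppm, which is why a rate (e.g. per century) is a more natural criterion than an absolute offset.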
Lines 421 to 425: I don’t understand what is being proposed here. Please make it clearer.
If model X is used in a given science MIP, is it still an entry card requirement that model X also runs the CMIP7 DECK? This is not clear.
We have revised the text accordingly to clarify that the DECK remains “mandatory” for inclusion in ESGF.
Line 628: The REF is mentioned and somewhere else this is defined as a Rapid Evaluation Framework. What the REF is, and what it is intended for, needs to be more clearly explained.
We have added the following paragraph to the manuscript that explains the idea and structure of the Rapid Evaluation Framework (REF). We also added the reference for the REF that will be available in the GMD CMIP special issue as well.
Citation: https://doi.org/10.5194/egusphere-2024-3874-AC2
RC2: 'Comment on egusphere-2024-3874', Chris Jones, 28 Jan 2025
The comment was uploaded in the form of a supplement:
Citation: https://doi.org/10.5194/egusphere-2024-3874-RC2
AC1: 'Reply on RC2', John Dunne, 12 Apr 2025
The authors deeply appreciate the reviewer's careful attention to the previous version of the manuscript and have incorporated their suggested changes throughout.
Original reviewer comments are in italics. Author responses are provided inline in bold.
RC2: Chris Jones:
Review of CMIP7 documentation paper, by Dunne et al. Firstly, to say that the CMIP panel and authors here are to be congratulated on the way they have approached the task of developing CMIP7 plans in a complex landscape of requirements. CMIP has had a lot of success historically, but requirements have grown and that growth is not sustainable, so the new approach to consult with both users and providers and hence prioritise a more manageable, but still vital, set of simulations has been extremely welcome. The outreach, consultation and dissemination of information has been excellent throughout and this paper contributes to that process. CMIP is a huge undertaking and changes the deployment of resource (both personal and computing/technology) in many, many modelling and research centres around the world. Careful design of what is requested and why is essential. I perform this review mainly in the context that the main aspects of CMIP7 and the Fast Track are already determined and too late to make substantial changes. Therefore, I focus on the presentation and explanation aspects with a few suggestions of things which could still be tweaked or clarified. My major comment is to ask for more details on where the “Guiding Research Questions” came from. Are these the result of a consultation on the priority climate science questions? They resemble, but are not the same as, past WCRP grand challenges (e.g. on extremes or carbon cycle).
We have expanded the context for the research questions. As we explain, the questions were developed by the CMIP panel as a way of making connections among experiments proposed during initial planning. They represent timely opportunities, based on new observations and evolving modeling capabilities, but do not constrain the research agenda. They are specific to ESMs and hence narrower than, e.g., the WCRP Grand Challenges.
The way the paper is presented implies you started with these as a guiding set of questions and designed CMIP7 to answer them. But in practice that wasn’t how I recall it happening – so have these questions been retro-fitted to the experiments? E.g. line 132 says that CMIP7 design came from consultation and surveys – this is certainly true of the experiments – but did this consultation also take place for the science questions?
The questions were developed in parallel with the fast track experimental design. We have comprehensively rewritten the paragraph to better contextualize them. The fast track experiments were proposed by the strategic ensemble design task team, in iterative consultation with the MIPs, stakeholders, and the CMIP Panel. The questions, on the other hand, were developed within the CMIP Panel, driven by our assessment of the opportunities that recent developments and additional years of observations provide for enhanced scientific understanding, consistent with WCRP priorities as they apply specifically to modeling.
When I look over the CMIP7 web page there are lots of details and further links to the experiments, the task teams, the data request, the REF etc. Your figure 3 is replicated on the website, which mentions the science questions linked to each FT experiment - but I cannot see the questions described or explained anywhere. It feels like these questions have been added after the experiment design. If these really are “guiding questions” that have guided, and are intended to keep guiding, CMIP, I think they and their derivation need more prominence. It is not clear, for example, why you identify SST patterns over, say, cloud feedbacks, as a key driver of system sensitivity.
The text has been revised and context added, as described above.
Also, when you discuss a “carbon-water nexus” – is this just a catch-all for things not included in the other questions? The paragraphs of description of this question (sec 2.3) don’t appear to cover interactions between carbon and water cycles as implied by the “nexus” tag.
We have now clarified that the reason for calling it a “water-carbon nexus” is that the CMIP Panel sees water and land carbon as the scope of a set of fundamentally linked problems.
So overall it would be good to articulate how these priorities were arrived at. I am not querying the importance of these questions – they are clearly crucial. But other aspects (for example on aerosol forcing and cloud processes) could also be seen as equally important, and CMIP7 will address many more than just these. Maybe it is better to present the experiments first and then give some example high-priority questions as examples of things which CMIP7 may help address – but it feels to be overselling the tag of “guiding questions” to imply that these came first and led to the CMIP7 design.
In addition to the responses above, we have renamed the questions from ‘guiding research questions’ to ‘Fundamental Research Questions motivating Coupled Model Intercomparison’ to avoid confusion about how they were developed.
Other suggestions I think are important: Model/simulation quality. ii. Lines 374-375 – it feels reasonable to suggest a degree of stability of a control run: ±5 ppm is probably OK – but better as a rate than an absolute – is this ±5 ppm per century, for example? In CMIP6, C4MIP requested drifts of less than 10 PgC per century in the main pools.
Agreed, we have revised this as was also requested by RC1.
But it would be consistent to also request stability criteria for other metrics – e.g. global T must drift by no more than +- XX degrees, or AMOC within XX Sv. It would be good to treat all major climate components similarly.
We have added the C4MIP guidance on carbon system equilibration, ocean heat content drift, and surface temperature. Beyond presenting these basic global metrics and requesting additional metrics be saved in the spin-ups (Appendix 1), the authors feel that adequate treatment of individual climate components is outside the scope of this paper. We have also added context that the Rapid Evaluation Framework will allow for a better assessment of different aspects of model performance and simulation for different potential end users and applications, to support more comprehensive assessment of model performance than by global mean temperature alone.
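The rate-based framing discussed above (e.g. ±5 ppm per century, or C4MIP's 10 PgC per century for the main carbon pools) amounts to a linear trend of an annual-mean control-run series. A minimal sketch in Python, with a hypothetical control-run series; the function and variable names are illustrative and not part of any CMIP tooling:

```python
import numpy as np

def drift_per_century(values, years):
    """Least-squares linear trend of an annual-mean control-run series,
    expressed per 100 years (e.g. ppm/century or PgC/century)."""
    slope = np.polyfit(years, values, 1)[0]  # trend in units per year
    return slope * 100.0

# Hypothetical 500-year piControl atmospheric CO2 series (ppm) with a
# small imposed drift of 0.002 ppm/yr plus interannual noise:
years = np.arange(500)
co2 = 284.0 + 0.002 * years + np.random.default_rng(0).normal(0.0, 0.3, 500)
drift = drift_per_century(co2, years)  # ~0.2 ppm/century, well within ±5
```

Expressing the criterion as a fitted rate, rather than an absolute offset, makes it insensitive to the length of the control segment being assessed.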
More importantly – I think it is unwise, however, to suggest arbitrary quality criteria for historical runs. Many ESMs may not hit the historical CO2 within 5 ppm. See e.g. Hajima et al. (https://egusphere.copernicus.org/preprints/2024/egusphere-2024-188/) for a thorough evaluation of CMIP6 models in this respect. What happens if a model does not hit your 5 ppm bounds – is it excluded from analysis?
We have changed this statement to “As background, guidance is that modelling centres seek to improve upon the historical CO2 trend in their esm-hist relative to the CMIP6 ensemble, which was found to be biased by -15 to +20 ppmv CO2 by 2014 (Gier et al., 2020) and has been the topic of much recent research (e.g. Hajima et al., 2025)”. We have also added the C4MIP guidance on a “stable” piControl carbon budget, but it was just a target, not a “requirement” to put data on ESGF, nor will any models be excluded from the ensemble if the criterion is not met. Users will decide whether to use the output in their analysis.
Again – as above, will you also specify acceptance criteria on other measures? – e.g. goodness of fit of the historical temperature record?
As now clarified per the above changes, the criteria are targets, not a “requirement” to put data on ESGF, nor will any models be excluded from the ensemble if the criterion is not met.
This would be a big change for CMIP – to specify acceptance criteria – I think it needs much more consultation before you introduce this.
We agree with this concern and hope the above changes make it clear that CMIP7 will not exclude models based on performance assessment, but the Rapid Evaluation Framework aims to make it easier for end users to assess individual model fitness for various applications and regions. This reflects consultations with developers and user groups in the SED task team, which assessed that high-level acceptance criteria or model subselection would not be appropriate for CMIP, due to the difficulty of defining all-purpose skill scores.
Ensembles – do you have any recommendations around generation of ensembles (from each model)? I realise you don’t want to rule out models by requiring large ensembles, but some experiments may benefit more than others from ensembles.
We agree with the reviewer and have added the following guidance: “While any size of ensemble is acceptable to meet the mandatory DECK compliance for submission to ESGF, submission of multiple ensemble members of historical and/or esm-hist simulations is highly encouraged as critical to a wide range of detection and attribution questions (see sections 2.1, 2.2, and 3.3). Large ensembles of Atmospheric Model Intercomparison Project (AMIP) simulations forced by SST and Sea Ice Concentrations (SIC) are also encouraged.”
Line 510 says that the FT “promotes the generation of ensembles” – but it is not clear how? FT does not appear to mention ensembles at all – but it could be a good opportunity to do so. It might be useful to provide guidance on this without mandating.
While the previous language was referring to the CMIP ensemble, not the ensembling of a single model, the point is well-taken, and we have changed this to “The Assessment Fast Track experiments (Table 3) were chosen as a practical balance among the number of participating models, and the complexity, resolution, and number of ensemble members for each model (Figure 1), to help distinguish the role of different processes and interactions and local versus remote drivers.”
Likewise you could guide on choice of initial conditions (e.g. branch points best taken >XX years apart from the control run).
We have added a section 3.4.6 on the aspiration and best practice for initial condition ensemble generation on revision. We agree with the reviewer that a strategy which samples states of low-frequency climate variability (such as 20-year intervals from esm-piControl) is preferable to incremental perturbations, to avoid aliasing internal variability in the pre-industrial ensemble mean. We will also highlight the importance of using a sufficiently spun-up control state when branching, by recommending a desired maximum drift tendency in section 3.2.
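The branching strategy discussed here (skip a spun-up period, then space members widely enough apart to sample low-frequency variability) can be sketched in a few lines; the helper name and its default parameters are illustrative assumptions, not part of the CMIP7 protocol:

```python
def branch_years(ctrl_length, spacing=20, spinup=100, n_members=5):
    """Candidate branch years for an initial-condition ensemble.

    Skips the first `spinup` years of the control run (where residual
    drift is largest), then spaces members `spacing` years apart so
    each samples a different state of low-frequency internal
    variability, rather than using incremental perturbations.
    """
    candidates = range(spinup, ctrl_length, spacing)
    return list(candidates)[:n_members]

# For a hypothetical 500-year control run:
members = branch_years(500)  # [100, 120, 140, 160, 180]
```

The same logic applies whether branching from piControl or esm-piControl; only the spacing assumption (here 20 years) would change with the variability timescale one wishes to sample.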
As an example, quantifying TCRE from flat10 is a relatively large signal-to-noise activity. Ensembles may add little value to this. But quantifying ZEC from the flat10-zec simulation is a very small signal-to-noise activity, and ensembles of this run could be really useful. See e.g. Borowiak et al. (https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2024GL108654), which shows that ZEC derived from CMIP6 ZECMIP is subject to a level of uncertainty which CMIP6 did not consider due to lack of ensembles.
We have added clarification that we are referring to multi-model ensembles to assess structural variability rather than single-model ensembles to assess internal variability, which should resolve this. We also agree with the reviewer that ZEC in particular (here assessed with esm-flat10-zec) is a strong candidate for additional ensemble members for those centers who can afford it. However, general practice for CMIP7 is that such decisions on ideal extended ensemble size are the responsibility of the corresponding MIP – in this case, C4MIP.
Spin-up. I’m not sure I understand the request to submit numerical results from the spin-up of the models. What is the goal of this – how will they be used? “For curation” sounds like an odd phrase – why do these need curating? And what does “curation” involve – is this the same as archiving on a public database like ESGF?
We agree that we need to provide a better justification than “curation” and have changed this to “public dissemination” – indeed, our hope was that a “Fresh Eyes” team would perform an analysis of this dataset and that it would in general be useful information to researchers analysing the potential role of spin-up as a form of “structural uncertainty” and “internal variability”.
Model selection. I think you are very wise not to do any prior screening or selection of models. The “hot models” paper you cite in Appendix 3 by Hausfather et al. is rather simplistic in providing a table of “Y” and “N” on model screening based on sensitivity. A more nuanced analysis by Swaminathan et al. (https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2024EF004901) shows clearly that many metrics of crucial interest are not related to ECS. Many high-sensitivity models have very good evaluation scores on many metrics and vice versa – having a lower ECS is certainly not a measure of quality. Any screening or selection needs to be much better understood and carried out case-by-case for the application in question. It cannot (yet) be done at the scale of CMIP, which has so many downstream uses of the outputs.
We have moved the entirety of the model sub-selection section to Appendix 3 and added reference to the Swaminathan analysis.
Minor comments
Lines 102-107. This is a nice description of how CMIP has expanded and refined focus as both the expertise and need evolve. It feels that more knowledge of reversibility and symmetry is a big gap in our understanding of the climate system, and here could be a good place to articulate the need for more process exploration of how the system behaves under reversing of forcing.
We have added that the projections include “a range of increasing and recovery trajectories”.
Line 216 says that the CMIP7 focus on emissions-driven runs allows for more exploration of extremes under stabilisation – can you explain how so?
We have clarified that “The increasing proportion of models driven by emissions rather than concentrations will allow for novel investigation of extremes under climate stabilization due to the demonstrated rigor of Transient Response to Cumulative CO2 Emissions (TCRE; Matthews et al., 2009) and climate stability under zero emissions (MacDougall et al., 2020)”.
Sec 2.4 on points of no return – is there a reason not to call this either “tipping points” or “irreversibility”, which have become much more common phrases for these topics? Wood et al. (2023 – https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2022EF003369) is a good reference here for the framing of high-impact/low-likelihood outcomes and the need for research spanning different dimensions of this topic.
We have changed “Points of No Return/Ratcheting” to “Tipping Points” and added a reference to Wood et al., 2023.
Line 297 onwards – describing the CMIP7 DECK intent. It is worth being explicit here that the goal is only to characterise the response to _increasing_ forcing. It was a deliberate decision not to add a DECK experiment to characterise the system response to reducing forcing. (This remains a gap in CMIP7 – noting that flat10-cdr can only be performed by ESMs.)
Discussion of priorities for zero and negative emissions forcing experiments is included in the description of the “Assessment Fast Track”.
Table 1 is important. A couple of notes/suggestions: For esm-piControl the forcing is described as “emissions” – I wonder if this would be better described as “interactive CO2” or “simulated CO2”, because of course there are no emissions. So even though we informally describe this as “emissions mode”, it risks implying that there are some emissions being applied. Or at least specify that CO2 emissions are zero.
We have clarified that we have an “expanded protocol to facilitate participation with ESMs that close the carbon budget and are capable of running with interactive CO2 forced by emissions (including positive, zero, and negative scenarios) in addition to prescribed concentrations” and added “zero emissions” to Table 1.
Typo – looks like the 1% and historical lines have transposed the solar/volcanic forcing entries
Fixed
Line 355. Can you clarify the need for 100 years of control run before any experiments are branched off? I don’t recall this being requested in CMIP6
We have added “One change in CMIP7 is the explicit recommendation for modeling centers to provide at least 100 years of their piControl and/or esm-ctrl before the corresponding branching points for 1pct, 4xAbrupt and historical perturbations to allow users to better characterize drift.”
Line 364 – can you explain why a concentration-driven control run is required if the esm-control is stable? That seems redundant.
We have changed this guidance to “Note that a piControl simulation forced by the same CO2 concentration is also encouraged to account for any carbon-climate coupling differences with esm-piControl.” The concern here is not only that the esm-piControl might not be stable, but that it may have a fundamentally different vegetation state than would be in the piControl, depending on the treatment of canopy CO2 under the diurnal cycle and regional variability.
Table 2 is useful – but it feels odd to name individuals. What happens as/when a person moves job etc.? Maybe a named group in an organisation is more useful.
These were placeholders in the previous version and have been replaced with citable references.
Table 2, N deposition. Will this be speciated into dry/wet and oxidised/reduced reactive nitrogen?
This level of specificity cannot be answered at this time as it remains a placeholder until the dataset is provided.
Line 405. The section on spin-up – it is not clear how the strap line “characterising model diversity” is relevant to this sub-section. Maybe just call the section “ocean and land spin-up” (where land here includes land ice/cryosphere?).
Removed
Line 470 – is “SCP” a typo? “SSP”?
Yes, fixed
Table 3 is super useful and important – it will be a very good easy look-up of the whole set of FT simulations. But it is really big! It is important that it is produced and typeset to be easily readable given how big it is. I feel this comment may be more for the journal/typesetters than the authors – I hope you can find a way to make it well readable.
Agreed
Table 3 – scenario time period. You quote that scenarios run to 2100 – is this decided? I thought it would be 2125, or at least this was still being discussed. (Personal opinion – it drives me mad that IPCC figures and values can only ever quote a climate – i.e. a 20-year average – for 2090. So an extension to a minimum of 2110 seems vital so that we can actually quote a 2100 value for projected results!)
We have clarified with ScenarioMIP that the formal IAM “Realistic Scenarios” are driven by population and Gross Domestic Product data that only extend to 2100. However, all such “Scenarios” will continue past 2100 as more idealized “Extensions” to at least 2150, and in some cases beyond to 2500.
Appendix 1 – requested spin-up metrics. As per my comment above, I’m not yet convinced why you need to request these. But if you do, then to close the land carbon cycle you should also request cProduct. Even if the control run has no land-use _change_, it will still have land use, and the product pools may well be non-zero. cLand is then the sum of cVeg+cLitter+cSoil+cProduct.
Added to Appendix 1
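The closure the reviewer describes is a one-line budget identity, which can be turned into a simple check; a minimal sketch with illustrative pool values (the numbers are placeholders, not from any particular model):

```python
def cland_residual(cVeg, cLitter, cSoil, cProduct, cLand):
    """Residual of the land-carbon budget in PgC.

    cLand should equal the sum of the vegetation, litter, soil, and
    product pools; a nonzero residual flags a missing pool (e.g. an
    omitted cProduct) in the reported spin-up diagnostics.
    """
    return cLand - (cVeg + cLitter + cSoil + cProduct)

# Illustrative global pool sizes (PgC): 450 + 70 + 1500 + 12 = 2032
residual = cland_residual(cVeg=450.0, cLitter=70.0, cSoil=1500.0,
                          cProduct=12.0, cLand=2032.0)  # 0.0, budget closes
```

Dropping cProduct from the sum would leave a residual equal to the product-pool size, which is the reviewer's point: the budget only closes if all four pools are requested.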
Citation: https://doi.org/10.5194/egusphere-2024-3874-AC1
CC1: 'Comment on egusphere-2024-3874', Mark Zelinka, 28 Feb 2025
Please see attachment.
Citation: https://doi.org/10.5194/egusphere-2024-3874-CC1
AC3: 'Reply on CC1', John Dunne, 12 Apr 2025
The authors deeply appreciate these perspectives on the previous version of the manuscript and have incorporated most of the suggested changes.
Original reviewer comments in italics. Author responses to reviewers are provided inline in bold.
CC1, Mark Zelinka:
Review of “An evolving Coupled Model Intercomparison Project phase 7 (CMIP7) and Fast Track in support of future climate assessment” by Dunne et al [egusphere-2024-3874]
Summary: The authors motivate and describe the seventh iteration of CMIP, including the new Fast Track set of experiments which serves the IPCC. The paper is mostly effective in achieving these goals, but there are a few areas needing improvement. This review largely deals with issues relevant to the Cloud Feedback Model Intercomparison Project (CFMIP). Mark Zelinka, Maria Rugenstein, Alejandro Bodas-Salcedo, Jennifer Kay, Paulo Ceppi, and Mark Webb, on behalf of the CFMIP Scientific Steering Committee.
Major Comments
Section 2.1 describes the first of four guiding questions in CMIP7, dealing with pattern effects. A large part of the reason the scientific community is interested in pattern effects is because of the science conducted by members of the CFMIP community (Andrews et al. 2015; Zhou et al. 2016; Andrews and Webb 2017; Ceppi and Gregory 2017; Andrews et al. 2018, 2022), facilitated by CFMIP experiments like amip-piForcing (Andrews 2014; Webb et al. 2017), and illuminated by CFMIP diagnostics (including satellite cloud simulator diagnostics that reveal the diverse cloud responses to warming patterns). The “Why expect progress now?” section completely excludes a role for CFMIP while instead mentioning the roles that can be played by DAMIP and AerChemMIP. The focus here seems to be more on what causes warming patterns (a worthy goal), but the understanding of the climate response (including but not limited to clouds) to diverse warming patterns is essential to this problem and should not be neglected. Moreover, the surface temperature response pattern is likely to be at least partly affected by how clouds and their radiative effects feed back on warming patterns (Myers et al. 2017; Erfani and Burls 2019; Rugenstein et al. 2023; Espinosa and Zelinka 2024; Breul et al. 2025) and are involved in teleconnections that propagate surface temperature anomalies from high to low latitudes (Kang et al. 2023; Hsiao et al. 2022). We suggest better acknowledging CFMIP contributions to the current understanding of the pattern effect and explicitly calling out the role that CFMIP can play in making progress. We also note that the first sentence of this paragraph is rather hard to parse and is formulated rather weakly (“xyz may all help” – it remains unclear with what and how).
We have revised the discussion of all four questions to be more brief and pointed. In formulating question 1 we were anxious to communicate that the SST pattern problem was more expansive than cloud feedbacks, but took that thinking too far. The revised text seeks to provide a more balanced and compact discussion of connections between ocean temperatures, clouds, and other processes. The discussion of opportunities has likewise been sharpened to focus on general ideas rather than contributions from individual MIPs. We do want to emphasize new opportunities, precluding a discussion of previously-performed experiments and diagnostics no matter how valuable they have been.
CFMIP requests that the abrupt CO2 experiments (4x, 2x, and 0.5x) be run out to a minimum of 300 years, and we strongly encourage modeling groups to run beyond that (which could be noted at L331). Note that CFMIP requested this minimum duration as part of the Fast Track consultation process, which was then adapted into the request for the abrupt CO2 experiments. (See the abrupt-4xCO2 request: https://airtable.com/embed/appVPW6XAZfbOZjYM/shrqq9I4NJThwOT9W/tblkc1lkKEtiYKcho/viw9PLlrOnfUMcvHw/recl01t59HM8jz8ax.) Table 1 currently lists the abrupt-4xCO2 run as extending for “150+ (300)”, though it is not clear what this nomenclature means exactly. We request that “150+” be replaced with “300+” to make it clear that 300 years is the desired minimum, and “(300)” be replaced with “(1000)”.
We have adopted this suggestion.
The reasons for requesting that the abrupt CO2 runs be integrated for a minimum of 300 years, with strong encouragement to extend beyond that, are manifold:
○ Better ECS quantification: Rugenstein and Armour (2021) quantified with 10 equilibrated CMIP5 and CMIP6 models that 400 years are necessary to estimate the true equilibrium climate sensitivity within 5% error. The model spread in equilibration is large, and CMIP6/7 models probably need longer to equilibrate due to the "hot model problem" (Hausfather et al. 2022), which partly consists of temperature- and time-dependent feedbacks. Kay et al. (2024) estimated an equilibrium timescale of 200+ years for 2xCO2 and 500+ years for 0.5xCO2, noting important implications for paleo cold climate constraints (e.g., LGM) that can only be understood if the simulations are long enough.
○ Understanding centennial coupled behavior: Simulations of at least 300 years are necessary for estimating the pattern effect, ocean heat uptake and convection (Gjermundsen et al. 2021), AMOC recovery (Bonan et al. 2022), and Equatorial Pacific response timescales (Heede et al. 2020).
○ Understanding and quantifying feedback temperature dependence: This is not well understood, could lead to tipping points, and is, after the pattern effect and cloud feedbacks, the biggest unknown in estimating ECS, understanding hot models, and high-risk futures (Bloch-Johnson et al. 2021). It is very hard to quantify because it is obscured by the pattern effect, but is aided by longer simulations.
○ Practical considerations: Running existing simulations for longer is typically easier than running new simulations. Thus, if computing time is available at modeling centers, it is strongly encouraged that pre-industrial control and abrupt CO2 runs be extended as long as possible. Anecdotally, many of the model centers contributing to LongRunMIP (Rugenstein et al. 2019) had independently run their simulations for longer than 150 years and had the data sitting around, suggesting that in many cases such long simulations are already being performed or are trivial to extend. Currently, ~52 groups are using the LongRunMIP simulations for studies on internal variability, global warming levels, feedback quantification, paleo climate, oceanography, and training for data-driven machine learning approaches.
We have made the change to request 300 years. Discussion of LongRunMIP, however, is outside the scope of the present work.
Minor Comments
L34: Should it be “...include experiments to diagnose historical…”?
We have rephrased from “include historical, effective radiative forcing, and focus on CO2-emissions-driven experiments” to “...evaluate historical changes and effective radiative forcing”.
Introduction section: This section may be too long. The main audience of this paper is the science community that want to understand the rationale and details of the experimental design, not the history of CMIP iterations.
The introduction has been shortened and, we hope, sharpened.
L90: should be Zelinka et al 2020
Corrected
L125-127: Suggest being more specific and use “modeling community”, rather than “research community” as a whole. The research community benefits as a whole, but it doesn't share the burden.
Adopted
L130: “... the present experimental design includes some components …” This point is hard to parse.
This section is modestly rewritten for clarity
The entire paragraph reads well though, but the role DECK plays in climate services might need more highlighting. The remainder of the paper is phrased mostly in terms of science questions, and the role climate services play there remains somewhat unclear.
L140: Would it be worth listing a few big questions which were answered mainly or only through past CMIP cycles?
We have actually done the opposite in removing much of the introduction motivation to reduce the length in response to the reviewers but cite Durack et al., 2025 for CMIP history
L265-266: something wrong with the phrasing here
Changed to “Tipping Points”
Table 1: It's unclear why the request is for a small ensemble for historical and a large ensemble for amip.
We have added a new section to give more explicit guidance on ensembles (3.4.6).
Section 3.1.2: It would be helpful to see a plot of how the new forcing datasets differ from those used in CMIP6 during the 1850-2014 period.
Forcings will be the subject of their own set of publications.
L310/Fig.2: This schematic might benefit from a vertical time axis. The current version leaves a lot of room for interpretation. What are the small orange arrows? What is the connection between DECK and AR7 Fast Track?
The figure has been revised
L355: “year 100 or later of piControl” – is the rationale for this given anywhere in the manuscript?
Explained as similarly requested by RC2.
L383: The historical and AMIP simulations end in 2021 according to Table 1.
Corrected
L498: CFMIP deals with cloud and non-cloud feedbacks (all radiative feedbacks)
Corrected
L501: Figure 3 excludes RFMIP from the “Characterization” box, yet it is highlighted in this Characterization section, which is confusing.
Corrected
L510-511: Very hard to parse this statement
Clarified
L516: “Forcing” should be “Feedback”
Corrected
L517: I believe you mean “CFMIP” rather than (or in addition to) “CMIP” here
Corrected
L541: Missing section number
Text has been moved to figure caption
Table 3, amip-p4K: missing word here? “feedbacks observed”
Column removed for space considerations.
Table 3, amip-p4K: the number of years should be 44 (1979 - 2022)
Corrected to 43 (1979-2021)
Table 3, amip-piForcing: the number of years should be 153 (1870 - 2022)
Corrected to 152 (1870-2021)
L638: 4 should be 3
Corrected
Appendix 1 table: Suggest specifying top-of-atmosphere albedo when referencing rsdt and rsut
Added
L712-713: Might be some missing words here
Revised language
Citation: https://doi.org/10.5194/egusphere-2024-3874-AC3
CC2: 'Comment on egusphere-2024-3874', Cath Senior, 28 Feb 2025
The comment was uploaded in the form of a supplement:
Citation: https://doi.org/10.5194/egusphere-2024-3874-CC2
AC4: 'Reply on CC2', John Dunne, 12 Apr 2025
The authors deeply appreciate these perspectives on the previous version of the manuscript and have incorporated both a reference to the many national assessments CMIP has supported, as a webpage hosted by the IPO, and changed the label of the "Assessment Fast Track" to align with this more general utility.
Original reviewer comments in italics. Author responses to reviewers are provided inline in bold.
CC2, Cath Senior:
Comment on Dunne et al 2025: ‘An evolving Coupled Model Intercomparison Project phase 7 (CMIP7) and Fast Track in support of future climate assessment’. This is a very timely and important paper that lays out the evolution of the CMIP project and details the plans for its next phase, CMIP7. I have a couple of comments: An important part of the design of CMIP7 that differs from earlier phases is the separation of policy-relevant simulations (the Fast Track) from the research-orientated simulations designed to address the scientific questions and provide a rich characterisation of climate model capability to support future development. I feel the thinking behind this new development could be made more explicit. In particular, how this came about – at least in part – from the feedback from modelling groups about the burden of CMIP6 simulations. Engagement and support for modelling groups contributing to CMIP7 will be critical, and documenting more clearly the influence they had on the design of CMIP7 will give reassurance to the community that they can achieve a balance between delivering to their national agendas as well as engagement in international community science.
The motivation section has been comprehensively revised.
There are numerous references to the critical role that CMIP has played in underpinning the IPCC assessments. This is absolutely right and an important point to be made. I also think the authors have tried to carefully lay out that CMIP has – and will continue to – support the national and international science communities. However, what is perhaps missing is a third important role that the policy-relevant simulations have played in supporting the national assessments of many countries. A quick question to ChatGPT (!) gives the following 12 countries/communities that have used CMIP scenarios to deliver their national assessments. It would be good to document this important role, emphasising the support CMIP plays for national agendas.
a. United States • National Climate Assessment (NCA) • Led by the U.S. Global Change Research Program (USGCRP) • Uses CMIP5 and CMIP6 projections for national and regional climate impact assessments. • Latest report: Fifth National Climate Assessment (NCA5, 2023)
b. United Kingdom • UK Climate Projections (UKCP) • Developed by the Met Office Hadley Centre • Uses CMIP5 (UKCP18) and CMIP6 (UKCPNext) to provide probabilistic and high-resolution UK-specific projections.
c. European Union • European Climate Risk Assessment (EUCRA) • Managed by Copernicus Climate Change Service (C3S) and European Environment Agency (EEA) • Uses CMIP6 projections within EURO-CORDEX for downscaled regional assessments.
d. Canada • Canada’s Climate Change Report (CCCR) • Produced by Environment and Climate Change Canada (ECCC) • Uses CMIP5 and CMIP6 for projections at the national level.
e. Australia • State of the Climate Report (by CSIRO & Bureau of Meteorology) • Climate Change in Australia Projections • Uses CMIP5 and CMIP6, downscaled for Australian conditions.
f. Germany • GERICS Climate Fact Sheets (by the Climate Service Center Germany) • German Climate Change Assessment Report • Uses CMIP6 projections, often combined with EURO-CORDEX downscaling.
g. France • Drias Future Climate Scenarios (by Météo-France) • GREC (Regional Climate Group) Reports • Uses CMIP5 and CMIP6, combined with CNRM-CM models and EURO-CORDEX.
h. China • China’s Third National Climate Change Assessment Report • Uses CMIP5 and CMIP6 within China’s regional modeling framework (BNU-ESM, FGOALS).
i. Japan • Climate Change in Japan Report (by Japan Meteorological Agency, JMA) • Uses CMIP6 and the JRA-55 reanalysis dataset.
j. New Zealand • NIWA Climate Change Projections • Uses CMIP5 and CMIP6, often with regional downscaling via VCSN (Virtual Climate Station Network).
k. South Africa • South African Risk and Vulnerability Atlas (SARVA) • Uses CMIP5 and CORDEX-Africa for regional climate projections.
In acknowledgment of the broad utility of this effort for assessment beyond just the IPCC, we have changed “AR7 Fast Track” to “Assessment Fast Track”, and the IPO can set up a web page on the use of CMIP in National Assessments.
Citation: https://doi.org/10.5194/egusphere-2024-3874-AC4
CC3: 'Comment on egusphere-2024-3874', Annalisa Cherchi, 01 Mar 2025
Broad and comprehensive article to describe forthcoming CMIP7 effort. Some comments below:
- among the challenging questions, section 2.3 about the water-carbon-climate nexus does not fully exploit water and the importance of hydrological processes. We know there are still weaknesses and limitations in this (i.e. Douville et al 2021 in the last IPCC AR6 and beyond), but there are now more efforts in modelling centres in this direction;
- Fig 1: the term multiverse seems not fully appropriate as what is shown needs and depends on coupling and feedbacks between components and processes. Even here the hydrology part is not fully exploited/described. For example, monsoons are missing among the phenomena. The land interaction is expressed mostly in terms of vegetation and carbon cycle but land also interacts with the atmosphere via moisture and heat exchanges. In the caption of the figure, red and blue are mentioned as colors for atmosphere and ocean; what about land and cryosphere, for example? Also related to this, in lines 50-53 model development needs to consider and properly represent the coupling between the new components, cryosphere but also improved land-hydrology
- In terms of the outline of the paper, the key points highlighted in the abstract (lines 33-38) are not fully exploited within the text, either in terms of sectioning or, mostly, in the summary. In addition the summary (section 5) is not a real summary but mostly contains points of discussion and also new features of this CMIP cycle not described in the sections before, i.e. Fresh Eyes on CMIP. Also the concept of emulators would deserve a bit more clarification/explanation. Eventually these new aspects could be more extensively described in this manuscript, leaving some details of the experiments to forthcoming papers. For example, there are references to details of ScenarioMIP that is not published yet. There is probably no need for those details at this stage as they will be described and explained in detail once the reference papers are ready. A description (outline) of the content of the manuscript could be useful at the end of the Introduction.
- Overall there are some repetitions (mostly of concept) that could be avoided to simplify the reading (for example, lines 60-76 contain repetitions in the two paragraphs and the text could be rewritten and lightened), and there are some typos in section 5 (section numbering).
Citation: https://doi.org/10.5194/egusphere-2024-3874-CC3
AC5: 'Reply on CC3', John Dunne, 12 Apr 2025
The authors deeply appreciate these perspectives on the previous version of the manuscript and have incorporated most of the suggested changes.
Original reviewer comments are in italics. Author responses to reviewers are provided inline in bold.
CC3, Annalisa Cherchi:
Broad and comprehensive article to describe forthcoming CMIP7 effort. Some comments below:
- among the challenging questions, section 2.3 about the water-carbon-climate nexus does not fully exploit water and the importance of the hydrology processes. We know there are still weaknesses and limitations in this (i.e. Douville et al 2021 in last IPCC AR6 and beyond) but there are now more efforts in modelling centres in this direction;
Similar to comment by RC2, we have tried to make this link more explicit.
- Fig 1: the term multiverse seems not fully appropriate as what is shown needs and depends on coupling and feedbacks between components and processes. Even here the hydrology part is not fully exploited/described. For example, monsoons are missing among the phenomena. The land interaction is expressed mostly in terms of vegetation and carbon cycle but land also interacts with the atmosphere via moisture and heat exchanges. In the caption of the figure, red and blue are mentioned as colors for atmosphere and ocean; what about land and cryosphere, for example?
We have revised the Figure.
Also related to this, in lines 50-53 model development needs to consider and properly represent the coupling between the new components, cryosphere but also improved land-hydrology
We have added mention of cryosphere and land-hydrology interactions as key efforts in improving model comprehensiveness.
- In terms of the outline of the paper, the key points highlighted in the abstract (lines 33-38) are not fully exploited within the text, either in terms of sectioning or, mostly, in the summary.
The abstract has been revised.
In addition the summary (section 5) is not a real summary but mostly contains points of discussion and also new features of this CMIP cycle not described in the sections before, i.e. Fresh Eyes on CMIP.
The comment on Fresh Eyes has been moved to the earlier sections.
Also the concept of emulators would deserve a bit more clarification/explanation. Eventually these new aspects could be more extensively described in this manuscript, leaving some details of the experiments to forthcoming papers.
We have added a statement on emulators to the introduction and reframed this section.
For example, there are references to details of ScenarioMIP that is not published yet. There is probably no need for those details at this stage as they will be described and explained in detail once the reference papers are ready.
Now that the ScenarioMIP manuscript has been released, we have gone back and made sure this description is in alignment and not redundant or conflicting.
A description (outline) of the content of the manuscript could be useful at the end of the Introduction.
We end the introduction with a sentence describing the following sections.
- Overall there are some repetitions (mostly of concept) that could be avoided to simplify the reading (for example, lines 60-76 contain repetitions in the two paragraphs and the text could be rewritten and lightened), there are some typos in section 5 (section numbering).
We have deleted these repetitions.
[L60 delete: As an international research activity within WCRP,]
Corrected
[Line 73 delete: As a publicly available ensemble including state-of-the-art coupled model contributions from centers around the globe,]
Corrected
[L557-563 could be deleted].
Deleted
Citation: https://doi.org/10.5194/egusphere-2024-3874-AC5
Interactive discussion
Status: closed
Comment types: AC – author | RC – referee | CC – community | EC – editor | CEC – chief editor

RC1: 'Comment on egusphere-2024-3874', Anonymous Referee #1, 22 Jan 2025
CMIP has been a cornerstone of international Earth system modelling for the past 3 decades, delivering key science support to IPCC Assessments, while advancing the development of Earth system and climate models and their use to understand past and future evolution of the climate system. CMIP has been grappling with the dual demands of delivering science support (mainly future projections) to international climate change assessments, and the growing climate service sector, while also coordinating research-led experiments to advance Earth system models and scientific understanding. This dual set of demands has caused CMIP to grow significantly over its past two iterations (CMIP5 and CMIP6) in terms of MIPs, experiments to be run, and data to be archived, with consequences for contributing modelling groups. All of this has (and is) being done through short-term, uncoordinated (at an international level) funding, supporting the development of forcing data, realization of experiments, and maintenance of the underpinning infrastructure. Such a situation is difficult to maintain, something had to change in the organization of CMIP going forwards. CMIP7, as described in this paper, is a step towards such a change, with a first attempt to separate simulations intended to support international assessments (e.g. IPCC AR7 and the CMIP7 Fast Track) from other experiments intended to advance the science and modelling of the climate system (e.g. CMIP7 community MIPs). This paper is therefore timely and important. From this perspective the paper clearly needs to be published, though not in its present form. Below I outline a number of points that need addressing before the paper is suitable for publication. I hope this will increase its value for the research community and for CMIP more generally.
Major points.
The paper is very wordy, with lots of long sentences and lists justifying why things have been, or will be, done in a certain way. This makes the paper tedious to read. Addressing this could reduce the length of the paper (easily) by 25% and make it a more enjoyable read! As an example, lines 73 to 122 could be reduced to ~10 lines and still deliver the key messages. Section 3.5 adds very little. While the CMIP IPO is a very good development and is doing a great job supporting the development of CMIP7, I am not convinced much of section 4.1 is really needed in a paper. Section 4.2 is also very wordy and rambling. This is true for a lot of the introduction, which could be reduced in length without losing much.
There are quite a few examples of repetition. e.g. lines 84-86, lines 93-94, lines 135 to 138, 144-145, 440-445. This needs to be reduced throughout the manuscript.
There are also numerous examples of sentences beginning with long justifications for what is to come based on what has already been said: e.g. Line 93: “In addition to the systematic characterization of climate mechanisms….” or line 110: “Beyond direct contribution to national and international climate assessments…” and lots of similar examples. I don’t think these are needed and can be deleted in lots of places.
The paper has lots of examples explaining how CMIP has (and will) be supported by, aligned with, and deliver to, WCRP. While CMIP is a WCRP-sponsored activity and this is important, it is likely sufficient to say this once (most people know this already) and not have numerous motivations and links to WCRP listed. I suggest reducing these (examples include lines 60-63, 110-120, and others)
The 4 research questions are all interesting and important. What the paper lacks is a clear link between these research questions and the experiments proposed (either as part of the Fast Track or within the community MIPs). Will there be new experiments designed to specifically address some of the research questions? How will the existing experiments advance understanding? In some cases this is clear (e.g. CO2-emission driven models will likely expose (and lead to improvement in) carbon-cycle biases and feedbacks more thoroughly than concentration-driven models) but in many instances it isn’t. The connection between the guiding research themes and the experiments planned in CMIP7 needs to be better explained.
In two places (line 180 and line 645) there is an assertion high ECS models in CMIP6 have been proven to be incorrect and by implication these models are worse than lower ECS models, or just wrong. I don’t agree with this assertion. A high ECS Earth (>5K) is very unlikely, but it has not been conclusively ruled out. If anything, recent increased warming and suggestions of a possible role for changing cloud-radiation processes in this increased warming, may increase the likelihood of a high ECS world. In addition, some of the CMIP6 models with high (increased relative to CMIP5) ECS have been shown to realize this because of improvements in specific cloud feedback processes that were previously (erroneously) balancing other incorrect feedback magnitudes leading to a lower ECS through compensating errors. With removal (improvement) in one aspect of this compensation, ECS has increased. While the higher ECS may not be correct, the underlying processes/feedbacks are likely simulated better. To me this is a model improvement. It would be a pity if CMIP7 discouraged groups from making such important model improvements, even if that risked increasing their model ECS value. I suggest modifying these two assertions.
The general aspiration for CMIP7 to separate out policy-relevant simulations (e.g. Fast Track for IPCC AR7) and longer-term MIPs aimed at specific research questions, is a good one. The paper could do a better job explaining and motivating this separation, including how modelling groups could best contribute to either or both parts of CMIP7.
Table 3 is very long and poorly explained. Could it be presented in a more engaging manner? If the main explanations for the different experiments are in the references listed in the table, please let the reader know that. Also, I think there may be some errors in the table. e.g. (i) are piClim-histaer and piClim-histall 30y AMIP or 172y AMIP runs? (ii) For piClim-X and SSPXSST-SLCF I don’t see how feedbacks can be assessed (as suggested in the table) if the models are run in prescribed-SST mode. At least the classical definition of a feedback modifying the SST-response to a given forcing and thus also modifying the forcing itself, cannot be realised in prescribed-SST mode. (iii) for piClimSLCF it is unclear what happens to the non-SLCF emissions. Are these held at PI values? A bit better explanation of this table would help the reader.
Minor Points
On the “guiding research questions” I don’t understand why these are “ephemeral” (line 155).
Regarding the Fast Track experiments, it is not clear if groups are recommended to do everything in either emission-mode or concentration-mode. For example, are there plans for DAMIP to support both emission-driven and concentration-driven experiments? This is not made clear in the explanation of table 3.
Line 494-495: How will DAMIP support analysis of individual forcings in the context of an interactive carbon cycle? Will DAMIP run a coordinated set of experiments for emission-driven ESMs?
Line 128 talks about the lack of infrastructure for a sustained approach. This is also true with respect to funding of modelling groups to realize such regular simulations. This should also be highlighted.
Lines 221-223 on high resolutions models contradicts itself. Please make clearer what you mean here.
In section 2.3 I am surprised that emission-driven ESM (scenarioMIP) projections are not discussed more. This seems an important development on CMIP6.
Lines 266 to 267: while modelling groups suggest that increase in fire over this century (Allen et al. 2024) seems to be an incomplete sentence.
For section 2.4 more discussion on potential MIP contributions to addressing this seems appropriate (e.g. TIPMIP, CDRMIP, C4MIP). I am also surprised there isn’t more mention of global warming overshoot scenarios in this section.
Line 350: coupled carbon-climate ESMs importance in climate stabilization is mentioned. The importance in negative emission scenarios (warming overshoot) is likely even more important to mention.
Lines 386-388: Will there be a coordinated effort to compare CMIP6 historical and scenario forcings to those in CMIP7? This would be a good thing to do (e.g. a forcing comparison MIP).
Section 3.2. Will there be any stability/conservation requirements to meet for the piControl or esm_piControl runs?
Lines 421 to 425: I don’t understand what is being proposed here. Please make it clearer.
If model X is used in a given science MIP, is it still an entry-card that model-X also runs the CMIP7 DECK? This is not clear.
Line 628: The REF is mentioned and somewhere else this is defined as a Rapid Evaluation Framework. What the REF is, and what it is intended for, needs to be more clearly explained.
Citation: https://doi.org/10.5194/egusphere-2024-3874-RC1
AC2: 'Reply on RC1', John Dunne, 12 Apr 2025
The authors deeply appreciate the reviewer's careful attention to the previous version of the manuscript and have incorporated their suggested changes throughout.
Original reviewer comments are in italics. Author responses to reviewers are provided inline in bold.
RC1: Anonymous:
CMIP has been a cornerstone of international Earth system modelling for the past 3 decades, delivering key science support to IPCC Assessments, while advancing the development of Earth system and climate models and their use to understand past and future evolution of the climate system. CMIP has been grappling with the dual demands of delivering science support (mainly future projections) to international climate change assessments, and the growing climate service sector, while also coordinating research-led experiments to advance Earth system models and scientific understanding. This dual set of demands has caused CMIP to grow significantly over its past two iterations (CMIP5 and CMIP6) in terms of MIPs, experiments to be run, and data to be archived, with consequences for contributing modelling groups. All of this has (and is) being done through short-term, uncoordinated (at an international level) funding, supporting the development of forcing data, realization of experiments, and maintenance of the underpinning infrastructure. Such a situation is difficult to maintain, something had to change in the organization of CMIP going forwards. CMIP7, as described in this paper, is a step towards such a change, with a first attempt to separate simulations intended to support international assessments (e.g. IPCC AR7 and the CMIP7 Fast Track) from other experiments intended to advance the science and modelling of the climate system (e.g. CMIP7 community MIPs). This paper is therefore timely and important. From this perspective the paper clearly needs to be published, though not in its present form. Below I outline a number of points that need addressing before the paper is suitable for publication. I hope this will increase its value for the research community and for CMIP more generally.
Major points.
The paper is very wordy, with lots of long sentences and lists justifying why things have been, or will be, done in a certain way. This makes the paper tedious to read. Addressing this could reduce the length of the paper (easily) by 25% and make it a more enjoyable read! As an example, lines 73 to 122 could be reduced to ~10 lines and still deliver the key messages.
Thanks for your comment. We have substantially edited the manuscript with this in mind.
Section 3.5 adds very little.
Agreed. We have deleted this section and refer to Appendix 3.
While the CMIP IPO is a very good development and is doing a great job supporting the development of CMIP7, I am not convinced much of section 4.1 is really needed in a paper.
We agree that the section was originally too long and have shortened it in the revision and added explicit mention of the task teams, but we disagree on its outright removal. The authors feel that the role of the IPO and associated task teams is critical to publicly acknowledge as part of the formal scientific record in response to WMO Resolution 67 and community surveys requesting increases in support for community engagement.
Section 4.2 is also very wordy and rambling. This is true for a lot of the introduction, which could be reduced in length without losing much.
Agreed. The text has been reduced and tightened throughout the paper.
There are quite a few examples of repetition. e.g. lines 84-86, lines 93-94, lines135 to 138, 144-145, 440-445. This needs to be reduced throughout the manuscript.
Agreed. We have reduced text in the suggested lines.
There are also numerous examples of sentences beginning with long justifications for what is to come based on what has already been said: e.g. Line 93: “In addition to the systematic characterization of climate mechanisms….” or line 110: “Beyond direct contribution to national and international climate assessments…” and lots of similar examples. I don’t think these are needed and can be deleted in lots of places.
Agreed.  We have removed these sentences and reduced and revised the text substantially.
The paper has lots of examples explaining how CMIP has (and will) be supported by, aligned with, and deliver to, WCRP. While CMIP is a WCRP-sponsored activity and this is important, it is likely sufficient to say this once (most people know this already) and not have numerous motivations and links to WCRP listed. I suggest reducing these (examples include lines 60-63, 110-120, and others).
We have reduced these throughout the paper.
The 4 research questions are all interesting and important. What the paper lacks is a clear link between these research questions and the experiments proposed (either as part of the Fast Track or within the community MIPs). Will there be new experiments designed to specifically address some of the research questions? How will the existing experiments advance understanding? In some cases this is clear (e.g. CO2-emission driven models will likely expose (and lead to improvement in) carbon-cycle biases and feedbacks more thoroughly than concentration-driven models) but in many instances it isn’t. The connection between the guiding research themes and the experiments planned in CMIP7 needs to be better explained.
We have added some context for the science questions, including explaining their provenance and connection to e.g. the WCRP 2019-2028 Strategic Plan Science Objectives. We explain in particular that the questions are an assessment of timely opportunities rather than a constraint on the research agenda. We also explicitly refer readers to section 3.4.5, where the connections between questions and experiments are made.
In two places (line 180 and line 645) there is an assertion high ECS models in CMIP6 have been proven to be incorrect and by implication these models are worse than lower ECS models, or just wrong. I don’t agree with this assertion. A high ECS Earth (>5K) is very unlikely, but it has not been conclusively ruled out. If anything, recent increased warming and suggestions of a possible role for changing cloud-radiation processes in this increased warming, may increase the likelihood of a high ECS world. In addition, some of the CMIP6 models with high (increased relative to CMIP5) ECS have been shown to realize this because of improvements in specific cloud feedback processes that were previously (erroneously) balancing other incorrect feedback magnitudes leading to a lower ECS through compensating errors. With removal (improvement) in one aspect of this compensation, ECS has increased. While the higher ECS may not be correct, the underlying processes/feedbacks are likely simulated better. To me this is a model improvement. It would be a pity if CMIP7 discouraged groups from making such important model improvements, even if that risked increasing their model ECS value. I suggest modifying these two assertions.
The discussion around high ECS has been removed. We also highlight that the CMIP7 effort to provide the Rapid Evaluation Framework allows for a better assessment of different aspects of model performance and simulation for different potential end users and applications.
The general aspiration for CMIP7 to separate out policy-relevant simulations (e.g. Fast Track for IPCC AR7) and longer-term MIPs aimed at specific research questions, is a good one. The paper could do a better job explaining and motivating this separation, including how modelling groups could best contribute to either or both parts of CMIP7.
We appreciate the support of the reviewer for this concept and have tried to make the experimental design and connection to the research questions more clear.
Table 3 is very long and poorly explained. Could it be presented in a more engaging manner? If the main explanations for the different experiments are in the references listed in the table, please let the reader know that. Also, I think there may be some errors in the table. e.g. (i) are piClim-histaer and piClim-histall 30y AMIP or 172y AMIP runs? (ii) For piClim-X and SSPXSST-SLCF I don’t see how feedbacks can be assessed (as suggested in the table) if the models are run in prescribed-SST mode. At least the classical definition of a feedback modifying the SST-response to a given forcing and thus also modifying the forcing itself, cannot be realised in prescribed-SST mode. (iii) for piClimSLCF it is unclear what happens to the non-SLCF emissions. Are these held at PI values? A bit better explanation of this table would help the reader.
We have taken your comments on board and tried to streamline and clarify the information in the table and corrected for errors and inconsistencies.
Minor Points
On the “guiding research questions” I don’t understand why these are “ephemeral” (line 155).
We have made explicit why the questions are ephemeral: “These questions are more focused on the capabilities of current ESMs - and hence more ephemeral and timely - than those posed for CMIP6...”
Regarding the Fast Track experiments, it is not clear if groups are recommended to do everything in either emission-mode or concentration-mode. For example, are there plans for DAMIP to support both emission-driven and concentration-driven experiments? This is not made clear in the explanation of table 3.
The following clarification has been added to Table 3:
The esm- prefix indicates experiments are forced by CO2 emissions rather than CO2 concentrations.
Line 494-495: How will DAMIP support analysis of individual forcings in the context of an interactive carbon cycle? Will DAMIP run a coordinated set of experiments for emission-driven ESMs?
Only the FastTrack DAMIP experiments (concentration driven) are discussed here. We now refer to the DAMIP v2.0 paper (Gillett et al 2025), where additional DAMIP experiments are discussed.
Line 128 talks about the lack of infrastructure for a sustained approach. This is also true with respect to funding of modelling groups to realize such regular simulations. This should also be highlighted.
Agreed, so highlighted.
Lines 221-223 on high resolutions models contradicts itself. Please make clearer what you mean here.
This section has been substantially revised.
In section 2.3 I am surprised that emission-driven ESM (ScenarioMIP) projections are not discussed more. This seems an important development on CMIP6.
We have added a sentence highlighting the new focus on CO2-emissions forced scenarios.
Lines 266 to 267: while modelling groups suggest that increase in fire over this century (Allen et al. 2024) seems to be an incomplete sentence.
We have revised the text accordingly.
For section 2.4 more discussion on potential MIP contributions to addressing this seems appropriate (e.g. TIPMIP, CDRMIP, C4MIP). I am also surprised there isn’t more mention of global warming overshoot scenarios in this section.
Agreed.  Details of various MIP contributions to the research questions, including overshoot scenarios are included in 3.4.5
Line 350: coupled carbon-climate ESMs importance in climate stabilization is mentioned. The importance in negative emission scenarios (warming overshoot) is likely even more important to mention.
Revised accordingly.
Lines 386-388: Will there be a coordinated effort to compare CMIP6 historical and scenario forcings to those in CMIP7? This would be a good thing to do (e.g. a forcing comparison MIP).
Comparison of CMIP6 historical forcings datasets with those of CMIP7 is underway as a Fresh Eyes on CMIP project. Comparison of model simulations driven by CMIP6-era versus CMIP7-era forcings is proposed in DAMIP v2.0 (Gillett et al, 2025).
Section 3.2. Will there be any stability/conservation requirements to meet for the piControl or esm_piControl runs?
We have revised this section to include the C4MIP criterion of 10 PgC/century per component and the Irving et al., 2021 result comparing ocean heat content drift in CMIP6 piControls.
Lines 421 to 425: I don’t understand what is being proposed here. Please make it clearer.
If model X is used in a given science MIP, is it still an entry-card that model-X also runs the CMIP7 DECK? This is not clear.
We have revised the text accordingly to clarify that the DECK remains “mandatory” for inclusion in ESGF.
Line 628: The REF is mentioned and somewhere else this is defined as a Rapid Evaluation Framework. What the REF is, and what it is intended for, needs to be more clearly explained.
We have added the following paragraph to the manuscript that explains the idea and structure of the Rapid Evaluation Framework (REF). We also added the reference for the REF that will be available in the GMD CMIP special issue as well.
Citation: https://doi.org/10.5194/egusphere-2024-3874-AC2
RC2: 'Comment on egusphere-2024-3874', Chris Jones, 28 Jan 2025
The comment was uploaded in the form of a supplement:
Citation: https://doi.org/10.5194/egusphere-2024-3874-RC2
AC1: 'Reply on RC2', John Dunne, 12 Apr 2025
The authors deeply appreciate the reviewer's careful attention to the previous version of the manuscript and have incorporated their suggested changes throughout.
Original reviewer comments are in italics. Author responses to reviewers are provided inline in bold.
RC2: Chris Jones:
Review of CMIP7 documentation paper, by Dunne et al. Firstly to say that the CMIP panel and authors here are to be congratulated on the way they have approached the task of developing CMIP7 plans in a complex landscape of requirements. CMIP has had a lot of success historically but requirements have grown and that growth is not sustainable so the new approach to consult with both users and providers and hence prioritise a more manageable, but still vital, set of simulations has been extremely welcome. The outreach, consultation and dissemination of information has been excellent throughout and this paper contributes to that process. CMIP is a huge undertaking and changes the deployment of resource (both personal and computing/technology) in many, many modelling and research centres around the world. Careful design of what is requested and why is essential. I perform this review mainly in the context that the main aspects of CMIP7 and the Fasttrack, are already determined and too late to make substantial changes. Therefore, I focus on the presentation and explanation aspects with a few suggestions of things which could still be tweaked or clarified. My major comment is to ask for more details on where the “Guiding Research Questions” came from? Are these the result of a consultation on the priority climate science questions? They resemble, but are not the same as, past WCRP grand challenges (e.g. on extremes or carbon cycle).
We have expanded the context for the research questions. As we explain, the questions were developed by the CMIP panel as a way of making connections among experiments proposed during initial planning. They represent timely opportunities, based on new observations and evolving modeling capabilities, but do not constrain the research agenda. They are specific to ESMs and hence narrower than e.g. WCRP Grand Challenges.
The way the paper is presented implies you started with these as a guiding set of questions and designed CMIP7 to answer them. But in practice that wasn’t how I recall it happening – so have these questions been retro-fitted to the experiments? E.g. line 132 says that CMIP7 design came from consultation and surveys – this is certainly true of the experiments – but did this consultation also take place for the science questions?
The questions were developed in parallel to the fast track experimental design. We have comprehensively rewritten the paragraph to better contextualize them. The fast track experiments were proposed by the strategic ensemble design task team, in iterative consultation with the MIPs, stakeholders, and CMIP Panel. The questions, on the other hand, were developed within the CMIP Panel, driven by our assessment of the opportunities that recent developments and additional years of observations provide for enhanced scientific understanding, consistent with WCRP priorities as they apply specifically to modeling.
When I look over the CMIP7 web page there are lots of details and further links to the experiments, the task teams, the data request, the REF etc. Your figure 3 is replicated on the website, which mentions the science questions linked to each FT experiment - but I cannot see the questions described or explained anywhere. It feels like these questions have been added after the experiment design. If these really are “guiding questions” that have guided, and are intended to keep guiding, CMIP I think they and their derivation need more prominence. It is not clear, for example, why you identify SST patterns over, say, cloud feedbacks, as a key driver of system sensitivity?
The text has been revised and context added, as described above.
Also, when you discuss a "carbon-water nexus" – is this just a catch-all for things not included in the other questions? The paragraphs of description of this question (sec 2.3) don't appear to cover interactions between carbon and water cycles as implied by the "nexus" tag.
We have now clarified that the reason for calling it a "water-carbon nexus" is that the CMIP Panel sees water and land carbon as the scope of a set of fundamentally linked problems.
So overall it would be good to articulate maybe how these priorities were arrived at. I am not
querying the importance of these questions – they are clearly crucial. But other aspects (for example on aerosol forcing and cloud processes) could also be seen as equally important, and CMIP7 will address many more than just these. Maybe it is better to
present the experiments first and then give some example high priority questions as examples of things which CMIP7 may help address – but it feels to be overselling the tag of “guiding questions” to imply that these came first and led to the CMIP7 design.
In addition to the responses above, we have renamed the questions from 'guiding research questions' to 'Fundamental Research Questions motivating Coupled Model Intercomparison' to avoid confusion with how they were developed.
Other suggestions I think are important: Model/simulation quality. ii. Lines 374-375 – it feels reasonable to suggest a degree of stability of a control run: +5ppm is probably OK – but better as a rate than an absolute – is this +-5ppm per century for example? In CMIP6 C4MIP requested drifts of less than 10 PgC per century in the main pools.
Agreed, we have revised this as was also requested by RC1.
But it would be consistent to also request stability criteria for other metrics – e.g. global T must drift by no more than +- XX degrees, or AMOC within XX Sv. It would be good to treat all major climate components similarly.
We have added the C4MIP guidance on carbon system equilibration, ocean heat content drift, and surface temperature. Beyond presenting these basic global metrics and requesting additional metrics be saved in the spin-ups (Appendix 1), the authors feel that adequate treatment of individual climate components is outside the scope of this paper. We have also added context that the Rapid Evaluation Framework will allow for a better assessment of different aspects of model performance and simulation for different potential end users and applications, supporting more comprehensive assessment than model performance by global mean temperature alone.
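For illustration only (this is a hypothetical sketch, not part of the CMIP7 protocol or the C4MIP guidance text), a per-century drift criterion of the kind discussed above could be checked with a simple least-squares trend on an annual-mean control-run series:

```python
def drift_per_century(annual_means):
    """Least-squares linear trend of an annual-mean series, scaled to per-century.

    annual_means: yearly global means from a control run, e.g. total land
    carbon in PgC or atmospheric CO2 in ppm (illustrative inputs only).
    """
    n = len(annual_means)
    mean_t = (n - 1) / 2.0
    mean_y = sum(annual_means) / n
    # Slope of the ordinary least-squares fit y = a + b*t (units per year)
    cov = sum((t - mean_t) * (y - mean_y) for t, y in enumerate(annual_means))
    var = sum((t - mean_t) ** 2 for t in range(n))
    return cov / var * 100.0  # expressed per century

# A 500-year control pool drifting 0.05 PgC/yr gives ~5 PgC/century,
# within a 10 PgC/century style criterion.
series = [3000.0 + 0.05 * t for t in range(500)]
print(drift_per_century(series))
```

Expressing the criterion as a rate, as the reviewer suggests, makes it independent of control-run length.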
More importantly – I think it is unwise, however, to suggest arbitrary quality criteria for historical runs. Many ESMs may not hit the historical CO2 within 5ppm. See e.g. Hajima et al (https://egusphere.copernicus.org/preprints/2024/egusphere-2024-188/) for thorough evaluation of CMIP6 models in this respect. What happens if a model does not hit your 5ppm bounds – is it excluded from analysis?
We have changed this statement to "As background, guidance is that modelling centres seek to improve upon the historical CO2 trend in their esm-hist relative to the CMIP6 ensemble, which was found to be biased by -15 to +20 ppmv CO2 by 2014 (Gier et al., 2020) and has been the topic of much recent research (e.g. Hajima et al., 2025)". We have also added the C4MIP guidance on a "stable" PI Control carbon budget, but this is a target, not a "requirement" to put data on ESGF, nor will any models be excluded from the ensemble if the criterion is not met. Users will decide whether to use the output in their analysis.
Again – as above, will you also specify acceptance criteria on other measures? – e.g. goodness of fit of the historical temperature record?
As now clarified per the above changes, the criteria are targets but not a "requirement" to put data on ESGF, nor will any models be excluded from the ensemble if the criterion is not met.
This would be a big change for CMIP – to specify acceptance criteria – I think it needs much more consultation before you introduce this.
We agree with this concern and hope the above changes make it clear that CMIP7 will not exclude models based on performance assessment, but the Rapid Evaluation Framework aims to make it easier for end users to assess individual model fitness for various applications and regions. This reflects consultations with developers and user groups in the SED task team, which assessed that high level acceptance criteria or model subselection would not be appropriate for CMIP, due to the difficulty of defining all-purpose skill scores.
Ensembles – do you have any recommendations around generation of ensembles (from each model)? I realise you don't want to rule out models by requiring large ensembles, but some experiments may benefit more than others from ensembles.
We agree with the reviewer and have added the following guidance: "While any size of ensemble is acceptable to meet the mandatory DECK compliance for submission to ESGF, submission of multiple ensemble members of historical and/or esm-hist simulations is highly encouraged as critical to a wide range of detection and attribution questions (see sections 2.1, 2.2, and 3.3). Large ensembles of Atmospheric Model Intercomparison Project (AMIP) simulations forced by SST and Sea Ice Concentrations (SIC) are also encouraged."
Line 510 says that the FT “promotes the generation of ensembles” – but it is not clear how? FT does not appear to mention ensembles at all – but it could be a good opportunity to do so. It might be useful to provide guidance on this without mandating.
While the previous language was referring to the CMIP ensemble, not the ensembling of a single model, the point is well-taken, and we have changed this to "The Assessment Fast Track experiments (Table 3) were chosen as a practical balance among the number of participating models and the complexity, resolution, and number of ensemble members for each model (Figure 1) to help distinguish the role of different processes and interactions and local versus remote drivers."
Likewise you could guide on choice of initial conditions (e.g. branch points best taken >XX years apart from the control run).
We have added a section 3.4.6 on the aspiration and best practice for initial condition ensemble generation in the revision. We agree with the reviewer that a strategy which samples states of low frequency climate variability (such as 20 year intervals from esm-pictrl) is preferable to incremental perturbations, to avoid aliasing internal variability in the pre-industrial ensemble mean. We will also highlight the importance of using a sufficiently spun-up control state when branching by recommending a desired maximum drift tendency in section 3.2.
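As a toy illustration of the fixed-interval sampling strategy described above (a hypothetical helper with illustrative values, not a CMIP7 tool), candidate branch points could be enumerated as:

```python
def branch_years(control_start, control_length, interval=20, min_spinup=100):
    """Candidate ensemble branch points sampled at fixed intervals.

    Picks branch years every `interval` years after `min_spinup` years of
    control run, so ensemble members start from distinct phases of
    low-frequency internal variability rather than from incremental
    perturbations of a single state. (Hypothetical helper; values are
    illustrative, not prescribed by the protocol.)
    """
    first = control_start + min_spinup
    last = control_start + control_length
    return list(range(first, last, interval))

# A 300-year control starting in model year 1850 yields 10 candidate
# branch points, 20 years apart: 1950, 1970, ..., 2130.
print(branch_years(1850, 300))
```

The 100-year minimum before the first branch mirrors the spin-up recommendation discussed elsewhere in the manuscript.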
As an example, quantifying TCRE from flat10 is a relatively large signal-to-noise activity. Ensembles may add little value to this. But quantifying ZEC from the flat10-zec simulation is a very small signal-to-noise and ensembles of this run could be really useful. See e.g. Borowiak et al (https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2024GL108654) which shows that ZEC derived from CMIP6 ZECMIP are subject to a level of uncertainty which CMIP6 did not consider due to lack of ensembles.
We have added clarification that we are referring to multi-model ensembles to assess structural variability rather than single-model ensembles to assess internal variability, which should resolve this. We also agree with the reviewer that ZEC in particular (here assessed with esm-flat10-zec) is a strong candidate for additional ensemble members for those centers who can afford it. However, general practice for CMIP7 is that such decisions on ideal extended ensemble size are the responsibility of the corresponding MIP - in this case, C4MIP.
Spin-up. I’m not sure I understand the request to submit numerical results from the spin-up of the models. What is the goal of this – how will they be used? “for curation” sounds like an odd phrase – why do these need curating? And what does “curation” involve – is this the same as archiving on a public database like ESGF?
We agree that we need to provide a better justification than "curation" and have changed this to "public dissemination" – indeed, our hope was that a "Fresh Eyes" team would perform an analysis of this dataset and that it would be in general useful information to researchers doing analysis on the potential role of spinup as a form of "structural uncertainty" and "internal variability".
Model selection. I think you are very wise not to do any prior screening or selection of models. The "hot models" paper you cite in Appendix 3 by Hausfather et al is rather simplistic to provide a table of "Y" and "N" on model screening based on sensitivity. A more nuanced analysis by Swaminathan et al (https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2024EF004901) shows clearly that many metrics of crucial interest are not related to ECS. Many high sensitivity models have very good evaluation scores on many metrics and vice versa – having a lower ECS is certainly not a measure of quality. Any screening or selection needs to be much better understood and carried out case-by-case for the application in question. It cannot (yet) be done at the scale of CMIP which has so many downstream uses of the outputs.
We have moved the entirety of the model sub-selection section to Appendix 3 and added reference to the Swaminathan analysis.
Minor comments
Lines 102-107. This is a nice description of how CMIP has expanded and refined focus as both the expertise and need evolves. It feels that more knowledge of reversibility and symmetry is a big gap in our understanding of the climate system, and here could be a good place to articulate the need for more process exploration of how the system behaves under reversing of forcing.
We have added that the projections include "a range of increasing and recovery trajectories".
Line 216 says that CMIP7 focus on emissions-driven runs allows for more exploration of extremes under stabilisation – can you explain how so?
We have clarified that "The increasing proportion of models driven by emissions rather than concentrations will allow for novel investigation of extremes under climate stabilization due to the demonstrated rigor of Transient Response to Cumulative CO2 Emissions (TCRE; Matthews et al., 2009) and climate stability under zero emissions (MacDougall et al., 2020)".
Sec 2.4 on points of no return – is there a reason not to call this either “tipping points” or “irreversibility” which have become much more common phrases for
these topics. Wood et al (2023 - https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2022EF003369) is a good reference here for the framing of high impact/low likelihood outcomes and the need for research spanning different dimensions of this topic.
We have changed "Points of No Return/Ratcheting" to "Tipping Points" and added reference to Wood et al., 2023.
Line 297 onwards – describing the CMIP7 DECK intent. It is worth being explicit here that the goal is only to characterise the response to _increasing_ forcing. It was a deliberate decision not to add a DECK experiment to characterise the system response to reducing forcing. (This remains a gap in CMIP7 – noting that flat10-cdr can only be performed by ESMs)
Discussion of priorities for zero and negative emissions forcings experiments is included in the description of the "Assessment Fast Track".
Table 1 is important. A couple of notes/suggestions - For esm-piControl the forcing is described as "emissions" - I wonder if this should be better described as "interactive CO2" or "simulated CO2" because of course there are no emissions. So even though we informally describe this as "emissions mode" it risks implying that there are some emissions being applied. Or at least specify that CO2 emissions are zero.
We have clarified that we have "expanded the protocol to facilitate participation with ESMs that close the carbon budget and are capable of running with interactive CO2 forced by emissions (including positive, zero, and negative scenarios) in addition to prescribed concentrations" and added "zero emissions" to Table 1.
Typo – looks like the 1% and historical lines have transposed the solar/volcanic forcing entries
Fixed
Line 355. Can you clarify the need for 100 years of control run before any experiments are branched off? I don’t recall this being requested in CMIP6
We have added "One change in CMIP7 is the explicit recommendation for modeling centers to provide at least 100 years of their piControl and/or esm-ctrl before the corresponding branching points for 1pct, 4xAbrupt and historical perturbations to allow users to better characterize drift."
Line 364 – can you explain why conc-driven control run is required if the esm-control is stable? That seems redundant
We have changed this guidance to "Note that a piControl simulation forced by the same CO2 concentration is also encouraged to account for any carbon-climate coupling differences with esm-piControl." The concern here is not only that the esm-piControl might not be stable, but that it may have a fundamentally different vegetation state than would be in the piControl, depending on the treatment of canopy CO2 under the diurnal cycle and regional variability.
Table 2 is useful – but it feels odd to name individuals. What happens as/when a person moves job etc? Maybe a named group in an organisation is more useful.
These were placeholders in the previous version and have been replaced with citable references.
Table 2, N deposition. Will this be speciated into dry/wet and oxidised/reduced reactive nitrogen?
This level of specificity cannot be answered at this time as it remains a placeholder until the dataset is provided.
Line 405. The section on spin-up – it is not clear how the strap line "characterising model diversity" is relevant to this sub-section. Maybe just call the section "ocean and land spin-up" (where land here includes land ice/cryosphere?)
Removed
Line 470 – is “SCP” a typo? “SSP”?
Yes, fixed
Table 3 is super useful and important – it will be a very good easy-look-up of the whole set of FT simulations. But it is really big! It is important that it is produced and typeset to be easily readable given how big it is. I feel this comment may be more for the journal/typesetters than the authors – I hope you can find a way to make it well readable.
Agreed
Table 3 – scenario time period. You quote that scenarios run to 2100 – is this decided? I thought it would be 2125, or at least this was still being discussed. (personal opinion – it drives me mad that IPCC figures and values can only ever quote a climate – i.e. 20-year average – for 2090. So an extension to a minimum of 2110 seems vital so that we can actually quote a 2100 value for projected results!)
We have clarified with ScenarioMIP that the formal IAM "Realistic Scenarios" are driven by population and Gross Domestic Product data that only extends to 2100. However, all such "Scenarios" will continue past 2100 as more idealized "Extensions" to at least 2150 and in some cases beyond to 2500.
Appendix 1 – requested spin-up metrics. As per my comment above I'm not yet convinced why you need to request these. But if you do, then to close the land carbon cycle you should also request cProduct. Even if the control run has no land-use _change_ it will still have land use, and the product pools may well be non-zero. cLand is then the sum of cVeg+cLitter+cSoil+cProduct.
Added to Appendix 1
Citation: https://doi.org/10.5194/egusphere-2024-3874-AC1
CC1: 'Comment on egusphere-2024-3874', Mark Zelinka, 28 Feb 2025
Please see attachment.
Citation: https://doi.org/10.5194/egusphere-2024-3874-CC1
AC3: 'Reply on CC1', John Dunne, 12 Apr 2025
The authors deeply appreciate these perspectives on the previous version of the manuscript and have incorporated most of the suggested changes. Original reviewer comments in italics. Author responses to reviewers are provided inline in bold.
CC1, Mark Zelinka:
Review of “An evolving Coupled Model Intercomparison Project phase 7 (CMIP7) and Fast Track in support of future climate assessment” by Dunne et al [egusphere-2024-3874]
Summary: The authors motivate and describe the seventh iteration of CMIP, including the new Fast Track set of experiments which serves the IPCC. The paper is mostly effective in achieving these goals, but there are a few areas needing improvement. This review largely deals with issues relevant to the Cloud Feedback Model Intercomparison Project (CFMIP). Mark Zelinka, Maria Rugenstein, Alejandro Bodas-Salcedo, Jennifer Kay, Paulo Ceppi, and Mark Webb, on behalf of the CFMIP Scientific Steering Committee.
Major Comments
Section 2.1 describes the first of four guiding questions in CMIP7, dealing with pattern effects. A large part of the reason the scientific community is interested in pattern effects is because of the science conducted by members of the CFMIP community (Andrews et al. 2015; Zhou et al. 2016; Andrews and Webb 2017; Ceppi and Gregory 2017; Andrews et al. 2018, 2022), facilitated by CFMIP experiments like amip-piForcing (Andrews 2014; Webb et al. 2017), and illuminated by CFMIP diagnostics (including satellite cloud simulator diagnostics that reveal the diverse cloud responses to warming patterns). The "Why expect progress now?" section completely excludes a role for CFMIP while instead mentioning the roles that can be played by DAMIP and AerChemMIP. The focus here seems to be more on what causes warming patterns (a worthy goal), but the understanding of the climate response (including but not limited to clouds) to diverse warming patterns is essential to this problem and should not be neglected. Moreover, the surface temperature response pattern is likely to be at least partly affected by how clouds and their radiative effects feed back on warming patterns (Myers et al. 2017; Erfani and Burls 2019; Rugenstein et al. 2023; Espinosa and Zelinka 2024; Breul et al. 2025) and are involved in teleconnections that propagate surface temperature anomalies from high to low latitudes (Kang et al. 2023; Hsiao et al. 2022). We suggest better acknowledging CFMIP contributions to the current understanding of the pattern effect and explicitly calling out the role that CFMIP can play in making progress. We also note that the first sentence of this paragraph is rather hard to parse and is formulated rather weakly ("xyz may all help" – it remains unclear with what and how).
We have revised the discussion of all four questions to be more brief and pointed. In formulating question 1 we were anxious to communicate that the SST pattern problem was more expansive than cloud feedbacks but took that thinking too far. The revised text seeks to provide a more balanced and compact discussion of connections between ocean temperatures, clouds, and other processes. The discussion of opportunities has likewise been sharpened to focus on general ideas rather than contributions from individual MIPs. We do want to emphasize new opportunities, precluding a discussion of previously-performed experiments and diagnostics no matter how valuable they've been.
CFMIP requests that the abrupt CO2 experiments (4x, 2x, and 0.5x) be run out to a minimum of 300 years, and we strongly encourage modeling groups to run beyond that (which could be noted at L331). Note that CFMIP requested this minimum duration as part of the FastTrack consultation process, which was then adapted into the request for the abrupt CO2 experiments. (See the abrupt-4xCO2 request: https://airtable.com/embed/appVPW6XAZfbOZjYM/shrqq9I4NJThwOT9W/tblkc1lkKEtiYKcho/viw9PLlrOnfUMcvHw/recl01t59HM8jz8ax.) Table 1 currently lists the abrupt-4xCO2 run as extending for "150+ (300)", though it is not clear what this nomenclature means exactly. We request that "150+" be replaced with "300+" to make it clear that 300 years is the desired minimum, and "(300)" be replaced with "(1000)".
We have adopted this suggestion.
The reasons for requesting that the abrupt CO2 runs be integrated for a minimum of 300 years with strong encouragement to extend beyond that are manifold:
○ Better ECS quantification: Rugenstein and Armour (2021) quantified with 10 equilibrated CMIP5 and CMIP6 models that 400 years are necessary to estimate the true equilibrium climate sensitivity within 5% error. The model spread in equilibration is large and CMIP6/7 models probably need longer to equilibrate due to the "hot model problem" (Hausfather et al. 2022), which partly consists of temperature- and time-dependent feedbacks. Kay et al (2024) estimated an equilibrium timescale of 200+ years for 2xCO2 and 500+ years for 0.5xCO2, noting important implications for paleo cold climate constraints (e.g., LGM) that can only be understood if the simulations are long enough.
○ Understanding centennial coupled behavior: Simulations of at least 300 years are necessary for estimating the pattern effect, ocean heat uptake and convection (Gjermundsen et al. 2021), AMOC recovery (Bonan et al. 2022), and Equatorial Pacific response timescales (Heede et al. 2020).
○ Understanding and quantifying feedback temperature dependence: This is not well understood, could lead to tipping points and is, after the pattern effect and cloud feedbacks, the biggest unknown in estimating ECS, understanding hot models, and high-risk futures (Bloch-Johnson et al. 2021). It is very hard to quantify because it is obscured by the pattern effect, but is aided by longer simulations.
○ Practical considerations: Running existing simulations for longer is typically easier than running new simulations. Thus, if computing time is available at modeling centers, it is strongly encouraged that pre-industrial control and abrupt CO2 runs be extended as long as possible. Anecdotally, many of the model centers contributing to LongRunMIP (Rugenstein et al. 2019) had independently run their simulations for longer than 150 years and had the data sitting around, suggesting that in many cases such long simulations are already being performed or are trivial to extend. Currently, ~52 groups are using the LongRunMIP simulations for studies on internal variability, global warming levels, feedback quantification, paleo climate, oceanography, and training for data-driven machine learning approaches.
We have made the change to request 300 years.  Discussion of LongrunMIP, however, is outside the scope of the present work.
Minor Comments
L34: Should it be “...include experiments to diagnose historical…”?
We have rephrased from "include historical, effective radiative forcing, and focus on CO2-emissions-driven experiments" to "...evaluate historical changes and effective radiative forcing".
Introduction section: This section may be too long. The main audience of this paper is the science community that want to understand the rationale and details of the experimental design, not the history of CMIP iterations.
The introduction has been shortened and, we hope, sharpened.
L90: should be Zelinka et al 2020
Corrected
L125-127: Suggest being more specific and use “modeling community”, rather than “research community” as a whole. The research community benefits as a whole, but it doesn't share the burden.
Adopted
L130” “... the present experimental design includes some components …” This point is hard to parse.
This section is modestly rewritten for clarity
The entire paragraph reads well though, but the role DECK plays in climate services might need more highlighting. The remainder of the paper is phrased mostly in terms of science questions, and the role climate services play in there remains somewhat unclear.
L140: Would it be worth listing a few big questions which were answered mainly or only through past CMIP cycles?
We have actually done the opposite in removing much of the introduction motivation to reduce the length in response to the reviewers but cite Durack et al., 2025 for CMIP history
L265-266: something wrong with the phrasing here
Changed to “Tipping Points”
Table 1: It's unclear why the request is for a small ensemble for historical and a large ensemble for amip.
We have added a new section to give more explicit guidance on ensembles (3.4.6).
Section 3.1.2: It would be helpful to see a plot of how the new forcing datasets differ from those used in CMIP6 during the 1850-2014 period.
Forcings will be the subject of their own set of publications.
L310/Fig.2: This schematic might benefit from a vertical time axis. The current version leaves a lot of room for interpretation. What are the small orange arrows? What is the connection between DECK and AR7 Fast Track?
The figure has been revised
L355: “year 100 or later of piControl” – is the rationale for this given anywhere in the manuscript?
Explained as similarly requested by RC2.
L383: The historical and AMIP simulations end in 2021 according to Table 1.
Corrected
L498: CFMIP deals with cloud and non-cloud feedbacks (all radiative feedbacks)
Corrected
L501: Figure 3 excludes RFMIP from the “Characterization” box, yet it is highlighted in this Characterization section, which is confusing.
Corrected
L510-511: Very hard to parse this statement
Clarified
L516: “Forcing” should be “Feedback”
Corrected
L517: I believe you mean “CFMIP” rather than (or in addition to) “CMIP” here
Corrected
L541: Missing section number
Text has been moved to figure caption
Table 3, amip-p4K: missing word here? “feedbacks observed”
Column removed for space considerations.
Table 3, amip-p4K: the number of years should be 44 (1979 - 2022)
Corrected to 43 (1979-2021)
Table 3, amip-piForcing: the number of years should be 153 (1870 - 2022)
Corrected to 152 (1870-2021)
L638: 4 should be 3
Corrected
Appendix 1 table: Suggest specifying top of atmosphere albedo when referencing rsdt and rsut.
Added
L712-713: Might be some missing words here
Revised language
Citation: https://doi.org/10.5194/egusphere-2024-3874-AC3
CC2: 'Comment on egusphere-2024-3874', Cath Senior, 28 Feb 2025
The comment was uploaded in the form of a supplement:
Citation: https://doi.org/10.5194/egusphere-2024-3874-CC2
AC4: 'Reply on CC2', John Dunne, 12 Apr 2025
The authors deeply appreciate these perspectives on the previous version of the manuscript and have incorporated both reference to the many national assessments CMIP has supported, as a webpage hosted by the IPO, and changed the label of the "Assessment Fast Track" to align with this more general utility.
Original reviewer comments in italics. Author responses to reviewers are provided inline in bold.
CC2, Cath Senior:
Comment on Dunne et al 2025: 'An evolving Coupled Model Intercomparison Project phase 7 (CMIP7) and Fast Track in support of future climate assessment'. This is a very timely and important paper that lays out the evolution of the CMIP project and details the plans for its next phase, CMIP7. I have a couple of comments: An important part of the design of CMIP7 that differs from earlier phases is the separation of policy relevant simulations (the Fast-Track) from the research orientated simulations designed to address the scientific questions and provide a rich characterisation of climate model capability to support future development. I feel the thinking behind this new development could be made more explicit. In particular, how this came about - at least in part - from the feedback from modelling groups about the burden of CMIP6 simulations. Engagement and support for modelling groups contributing to CMIP7 will be critical, and documenting more clearly the influence they had on the design of CMIP7 will give reassurance to the community that they can achieve a balance between delivering to their national agendas as well as engagement in international community science.
The motivation section has been comprehensively revised.
There are numerous references to the critical role that CMIP has played in underpinning the IPCC assessments. This is absolutely right and an important point to be made. I also think the authors have tried to carefully lay out that CMIP has - and will continue to - support the national and international science communities. However what is perhaps missing is a third important role that the policy relevant simulations have played in supporting the national assessments of many countries. A quick question to ChatGPT (!) gives the following 12 countries/communities that have used CMIP scenarios to deliver their national assessments. It would be good to document this important role emphasising the support CMIP plays for national agendas.
a. United States • National Climate Assessment (NCA) • Led by the U.S. Global Change Research Program (USGCRP) • Uses CMIP5 and CMIP6 projections for national and regional climate impact assessments. • Latest report: Fifth National Climate Assessment (NCA5, 2023)
b. United Kingdom • UK Climate Projections (UKCP) • Developed by the Met Office Hadley Centre • Uses CMIP5 (UKCP18) and CMIP6 (UKCPNext) to provide probabilistic and high-resolution UK-specific projections.
c. European Union • European Climate Risk Assessment (EUCRA) • Managed by Copernicus Climate Change Service (C3S) and European Environment Agency (EEA) • Uses CMIP6 projections within EURO-CORDEX for downscaled regional assessments.
d. Canada • Canada's Climate Change Report (CCCR) • Produced by Environment and Climate Change Canada (ECCC) • Uses CMIP5 and CMIP6 for projections at the national level.
e. Australia • State of the Climate Report (by CSIRO & Bureau of Meteorology) • Climate Change in Australia Projections • Uses CMIP5 and CMIP6, downscaled for Australian conditions.
f. Germany • GERICS Climate Fact Sheets (by the Climate Service Center Germany) • German Climate Change Assessment Report • Uses CMIP6 projections, often combined with EURO-CORDEX downscaling.
g. France • Drias Future Climate Scenarios (by Météo-France) • GREC (Regional Climate Group) Reports • Uses CMIP5 and CMIP6, combined with CNRM-CM models and EURO-CORDEX.
h. China • China's Third National Climate Change Assessment Report • Uses CMIP5 and CMIP6 within China's regional modeling framework (BNU-ESM, FGOALS).
i. Japan • Climate Change in Japan Report (by Japan Meteorological Agency, JMA) • Uses CMIP6 and the JRA-55 reanalysis dataset.
j. New Zealand • NIWA Climate Change Projections • Uses CMIP5 and CMIP6, often with regional downscaling via VCSN (Virtual Climate Station Network).
k. South Africa • South African Risk and Vulnerability Atlas (SARVA) • Uses CMIP5 and CORDEX-Africa for regional climate projections.
In acknowledgment of the broad utility of this effort for assessment beyond just the IPCC, we have changed "AR7 Fast Track" to "Assessment Fast Track", and the IPO can set up a web page on the use of CMIP in National Assessments.
Citation: https://doi.org/10.5194/egusphere-2024-3874-AC4
CC3: 'Comment on egusphere-2024-3874', Annalisa Cherchi, 01 Mar 2025
Broad and comprehensive article to describe forthcoming CMIP7 effort. Some comments below:
- among the challenging questions, section 2.3 about the water-carbon-climate nexus does not fully exploit water and the importance of the hydrology processes. We know there are still weaknesses and limitations in this (i.e. Douville et al 2021 in last IPCC AR6 and beyond) but there are now more efforts in modelling centres in this direction;
- Fig 1: the term multiverse seems not fully appropriate, as what is shown needs and depends on coupling and feedbacks between components and processes. Even here the hydrology part is not fully exploited/described. For example, monsoons are missing among the phenomena. The land interaction is expressed mostly in terms of vegetation and carbon cycle, but land also interacts with the atmosphere via moisture and heat exchanges. In the caption of the figure, red and blue are mentioned as colors for atmosphere and ocean; what about land and cryosphere, for example? Also related to this, in lines 50-53 model development needs to consider and properly represent the coupling between the new components, cryosphere but also improved land-hydrology
- In term of outline of the paper, the key points highlighted in the abstract (lines 33-38) are not fully exploited within the text, either in terms of sectioning and mostly in the summary. In addition the summary (section 5) is not a real summary but mostly contains points of discussion and also new features of this CMIP cycle not described in the sections before, ie. Fresh Eyes on CMIP. Also the concept of emulators would deserve a bit more of clarification/explanation. Eventually these new aspects could be more extensively described in this manuscript, leaving some details of the experiments to forthcoming papers. For example, there are references to details of ScenarioMIP that is not published yet. There is probably no need for those details at this stage as they will described and explained in details once the reference papers will be ready. A description (outline) of the content of the manuscript could be useful at the end of the Introduction.
- Overall there are some repetitions (mostly of concept) that could be avoided to simplify the reading (for example, lines 60-76 contains repetitions in the two paragraph and the text could be rewritten and lightened), there are some typos in section 5 (section numbering).
Citation: https://doi.org/10.5194/egusphere-2024-3874-CC3
AC5: 'Reply on CC3', John Dunne, 12 Apr 2025
The authors deeply appreciate these perspectives on the previous version of the manuscript and have incorporated most of the suggested changes.
Original reviewer comments are in italics. Author responses to reviewers are provided inline in bold.
CC3, Annalisa Cherchi
A broad and comprehensive article describing the forthcoming CMIP7 effort. Some comments below:
- Among the challenging questions, section 2.3 about the water-carbon-climate nexus does not fully exploit water and the importance of hydrological processes. We know there are still weaknesses and limitations here (e.g. Douville et al 2021 in the last IPCC AR6 and beyond), but there are now more efforts in modelling centres in this direction;
Similar to the comment by RC2, we have tried to make this link more explicit.
- Fig 1: the term multiverse seems not fully appropriate, as what is shown needs and depends on coupling and feedbacks between components and processes. Even here the hydrology part is not fully exploited/described. For example, monsoons are missing among the phenomena. The land interaction is expressed mostly in terms of vegetation and the carbon cycle, but land also interacts with the atmosphere via moisture and heat exchanges. In the caption of the figure, red and blue are mentioned as the colors for atmosphere and ocean; what about land and cryosphere, for example?
We have revised the Figure.
Also related to this, in lines 50-53 model development needs to consider and properly represent the coupling between the new components: the cryosphere, but also improved land hydrology.
We have added mention of cryosphere and land-hydrology interactions as key efforts in improving model comprehensiveness.
- In terms of the outline of the paper, the key points highlighted in the abstract (lines 33-38) are not fully exploited within the text, either in terms of sectioning or, mostly, in the summary.
The abstract has been revised.
In addition, the summary (section 5) is not a real summary but mostly contains points of discussion and also new features of this CMIP cycle not described in the preceding sections, i.e. Fresh Eyes on CMIP.
The comment on Fresh Eyes has been moved to the earlier sections.
Also, the concept of emulators would deserve a bit more clarification/explanation. Eventually these new aspects could be more extensively described in this manuscript, leaving some details of the experiments to forthcoming papers.
We have added a statement on emulators to the introduction and reframed this section.
For example, there are references to details of ScenarioMIP, which is not published yet. There is probably no need for those details at this stage, as they will be described and explained in detail once the reference papers are ready.
Now that the ScenarioMIP manuscript has been released, we have gone back and made sure this description is in alignment and not redundant or conflicting.
A description (outline) of the content of the manuscript could be useful at the end of the Introduction.
We end the introduction with a sentence describing the following sections.
- Overall there are some repetitions (mostly of concepts) that could be avoided to simplify the reading (for example, lines 60-76 contain repetitions across the two paragraphs, and the text could be rewritten and lightened); there are some typos in section 5 (section numbering).
We have deleted these repetitions.
[L60 delete: As an international research activity within WCRP,
Corrected
Line 73 delete: As a publicly available ensemble including state-of-the-art coupled model contributions from centers around the globe,
Corrected
L557-563 could be deleted].
Deleted
Citation: https://doi.org/10.5194/egusphere-2024-3874-AC5
Peer review completion
AR – Author's response | RR – Referee report | ED – Editor decision | EF – Editorial file upload
AR by John Dunne on behalf of the Authors (09 May 2025)
Author's response
Author's tracked changes
Manuscript
ED: Referee Nomination & Report Request started (13 May 2025) by Lele Shu
RR by Anonymous Referee #1 (02 Jun 2025)
RR by Anonymous Referee #3 (05 Jun 2025)
Suggestions for revision or reasons for rejection
The revised manuscript offers a comprehensive blueprint for CMIP7, detailing the expanded DECK, the Assessment Fast Track (AFT) suite, and the rationale behind the new emissions-driven and process-oriented experiments. Relative to the first revision, the paper is notably clearer, better structured, and more tightly linked to the four flagship science questions. These improvements make the manuscript substantially more readable and impactful. However, the interaction between carbon and water cycles remains insufficiently addressed in the revised Section 2.3. The manuscript also still contains overly long sentences and occasional grammatical errors, which at times impede readability. Further proofreading is recommended. I suggest a minor revision focused on strengthening Section 2.3 and improving the clarity of the writing. A number of specific comments are provided below for the authors’ consideration.
• Lines 64–70: “The historical publicly availability of CMIP ensembles have…” → “has.” Also, the sentence is too long. Suggest splitting: “…in house. This accessibility has advanced…”
• Line 95: “Unfortunately. the necessary ESM capabilities…” → “Unfortunately, the necessary…”
• Lines 119–124: Break into three sentences for clarity:
“The paper then provides guidance on protocols for the mandatory Diagnostics, Evaluation, and Characterization of Klima (DECK) experiments and the recommended Assessment Fast Track experiments (Section 3). It distinguishes between experiments with a stronger emphasis on assessment and service-oriented prediction and projection, and those aimed at process understanding through characterization and attribution. The paper concludes with a discussion of CMIP’s evolving role in the research community.”
• Lines 180–182: Consider revising for improved clarity as follows: “State-of-the-art coupled carbon cycle–climate modeling lies at the intersection of climate science, ecosystems, hydrology, biogeochemistry, and socioeconomic systems. The future resilience of natural systems and human-modulated carbon sinks remains one of the key uncertainties in efforts toward climate stabilization and warming reversal.”
• Lines 213–216: Would this be better:
“Forest dieback and demographic shifts, for example, depend heavily on drought risk and related thermal and hydrological stressors (Drijfhout et al., 2015). This makes the representation of climate–vegetation interactions critical for robust assessments of potential change, especially as resilience may already be declining in the Amazon (Boulton et al., 2022).”
• Lines 284–285: “recommend extend the simulation out to 300 years” → “recommend extending the simulation to 300 years”
• Line 310: Suggest breaking into two sentences: “…(if applicable). In other words,…”
• Lines 328–331: Consider revising to:
“As background, modeling centers are advised to improve the historical CO₂ trend in their esm-hist simulations, addressing biases observed in the CMIP6 ensemble, which ranged from –15 to +20 ppm by 2014 (Gier et al., 2020). The causes of these biases and strategies for reconciling model output with observations have been the focus of extensive recent research (e.g., Hajima et al., 2025).”
• Lines 544–546: Consider revising to:
“Thematic diagnostic groups and sustained-mode initiatives are also being established, with teams focusing on the CMIP carbon footprint, controlled vocabularies, and quality control/assurance.”
ED: Publish subject to minor revisions (review by editor) (06 Jun 2025) by Lele Shu
Please reply to the reviewer's comments in round 2; then we will make a decision on your manuscript.
AR by John Dunne on behalf of the Authors (20 Jun 2025)
Author's response
Manuscript
EF by Mario Ebel (23 Jun 2025)
Author's tracked changes
ED: Publish as is (02 Jul 2025) by Lele Shu
AR by John Dunne on behalf of the Authors (14 Jul 2025)
Manuscript
Journal article(s) based on this preprint
01 Oct 2025 | Highlight paper
An evolving Coupled Model Intercomparison Project phase 7 (CMIP7) and Fast Track in support of future climate assessment
John P. Dunne, Helene T. Hewitt, Julie M. Arblaster, Frédéric Bonou, Olivier Boucher, Tereza Cavazos, Beth Dingley, Paul J. Durack, Birgit Hassler, Martin Juckes, Tomoki Miyakawa, Matt Mizielinski, Vaishali Naik, Zebedee Nicholls, Eleanor O'Rourke, Robert Pincus, Benjamin M. Sanderson, Isla R. Simpson, and Karl E. Taylor
Geosci. Model Dev., 18, 6671–6700, 2025
Short summary
The seventh phase of the Coupled Model Intercomparison Project (CMIP7) coordinates efforts to answer key and timely climate science questions and facilitate delivery of relevant multi-model simulations for prediction and projection; characterization, attribution, and process understanding; and vulnerability, impact, and adaptation analysis. Key to the CMIP7 design are the mandatory Diagnostic, Evaluation and Characterization of Klima and optional Assessment Fast Track experiments.
Editorial statement
The Coupled Model Intercomparison Project lies at the core of global climate prediction. This paper details the Coupled Model Intercomparison Project phase 7 (CMIP7) and its Fast Track initiative. By transitioning into a continuous climate modeling program with enhanced coordination and federated planning, CMIP7 aims to address key climate questions more effectively. The expansion of the Diagnostic, Evaluation, and Characterization of Klima (DECK) experiments—including the addition of historical simulations, effective radiative forcing assessments, and CO₂-emissions-driven experiments—strengthens the foundation for climate model evaluation and projection. Additionally, the AR7 Fast Track ensures timely delivery of critical climate simulation data to support the upcoming 7th Intergovernmental Panel on Climate Change Assessment Report. This paper highlights how these advancements in experimental protocols and infrastructure support not only scientific understanding but also inform policy-making and climate services, ultimately contributing to global efforts in climate adaptation and mitigation.
Viewed
Total article views: 5,252 (HTML: 4,485, PDF: 720, XML: 47; BibTeX: 31, EndNote: 46), cumulative since 20 Dec 2024.
Viewed (geographical distribution)
Total article views: 5,009, thereof 5,009 with geography defined and 0 of unknown origin.
Latest update: 22 Apr 2026
John Patrick Dunne
CORRESPONDING AUTHOR
john.dunne@noaa.gov
NOAA/Geophysical Fluid Dynamics Laboratory, Princeton, USA
Helene T. Hewitt
Met Office Hadley Centre, Exeter, UK
Julie Arblaster
School of Earth, Atmosphere and Environment, Monash University, Australia
Frédéric Bonou
Laboratory of Physics and Applications (LPA), National University of Sciences, Technology, Engineering and Mathematics of Abomey (UNSTIM), Benin
Olivier Boucher
Institut Pierre-Simon Laplace, Sorbonne Université / CNRS, Paris, France
Tereza Cavazos
Center for Scientific Research and Higher Education of Ensenada (CICESE), Baja California, Mexico
Paul J. Durack
PCMDI, Lawrence Livermore National Laboratory, Livermore, CA, USA
Birgit Hassler
Deutsches Zentrum für Luft- und Raumfahrt (DLR), Institut für Physik der Atmosphäre, Oberpfaffenhofen, Germany
Martin Juckes
University of Oxford, and UKRI STFC, UK
Tomoki Miyakawa
Atmosphere and Ocean Research Institute, The University of Tokyo, Kashiwa, Japan
Matthew Mizielinski
Met Office Hadley Centre, Exeter, UK
Vaishali Naik
NOAA/Geophysical Fluid Dynamics Laboratory, Princeton, USA
Zebedee Nicholls
Climate Resource, Berlin, Germany
Energy, Climate and Environment Program, International Institute for Applied Systems Analysis (IIASA), 2361 Laxenburg, Austria
School of Geography, Earth and Atmospheric Sciences, The University of Melbourne, Melbourne, Victoria, Australia
Eleanor O’Rourke
CMIP International Project Office, ECSAT, Harwell Science & Innovation Campus, UK
Robert Pincus
Lamont-Doherty Earth Observatory, Columbia University, Palisades NY USA
Benjamin M. Sanderson
CICERO, Oslo, Norway
Isla R. Simpson
NSF National Center for Atmospheric Research, Boulder, Colorado, USA
Karl E. Taylor
PCMDI, Lawrence Livermore National Laboratory, Livermore, CA, USA
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Short summary
This manuscript provides the motivation and experimental design for the seventh phase of the Coupled Model Intercomparison Project (CMIP7) to coordinate community based efforts to answer key and timely climate science questions and facilitate delivery of relevant multi-model simulations for: prediction and projection, characterization, attribution and process understanding; vulnerability, impacts and adaptations analysis; national and international climate assessments; and society at large.