
Evidence for recency of practice standards for regulated health practitioners in Australia: a systematic review



Background

Health practitioner regulators throughout the world use registration standards to define the requirements health practitioners need to meet for registration. These standards commonly include recency of practice (ROP) standards designed to ensure that registrants have sufficient recent practice in the scope in which they intend to work to practise safely. As the ROP registration standards for most National Boards are currently under review, it is timely that an appraisal of current evidence be carried out.


Methods

A systematic review was conducted using databases (including MEDLINE, EMBASE, PsycINFO, and CINAHL), search engines, and a review of grey literature published between 2015 and April 2022. Publications included in the review were assessed against the relevant CASP checklist for quantitative studies and the Joanna Briggs Institute checklist for analytical cross-sectional studies.


Results

The search yielded 65 abstracts of which 12 full-text articles met the inclusion criteria. Factors that appear to influence skills retention include the length of time away from practice, level of previous professional experience and age, as well as the complexity of the intervention. The review was unable to find a clear consensus on the period of elapsed time after which a competency assessment should be completed.


Conclusions

Factors that need to be taken into consideration in developing ROP standards include length of time away from practice, previous experience, age and the complexity of the intervention; however, there is a need for further research in this area.



Background

Health practitioner regulators in Australia and other jurisdictions define the requirements that applicants and registrants need to meet to become or stay registered. These standards are an important part of the regulatory framework for each profession and commonly include standards for primary education in the profession, continuing professional development (CPD), and recency of practice (ROP).

In 2010, Australia introduced the National Registration and Accreditation Scheme (the National Scheme) which regulates 16 health professions under the Health Practitioner Regulation National Law (the National Law). The National Law requires that National Boards must develop, consult on, and recommend certain registration standards to the Ministerial Council. ROP requirements aim to ensure that a health practitioner has sufficient recent practice in the scope in which they intend to work and that they have maintained an adequate connection with their profession to ensure they can practise competently and safely [1]. Health practitioners can become clinically inactive for a range of reasons including caring for family members, career dissatisfaction, health-related absences, the pursuit of other careers (including leadership and academic roles) [2] and the participation in approved research training or other educational opportunities [3]. In some cases, this may take the form of a career break, whereas other practitioners may be clinically active but are doing a low volume of work or be seeking to change their scope of practice.

The core registration standards are generally reviewed by National Boards every five years in line with good regulatory practice. Previous reviews of ROP standards were underpinned by two unpublished systematic reviews designed for internal use. These were a commissioned systematic review conducted by Professor Elizabeth Farmer in 2012 and an update of that review by the Australian Health Practitioner Regulation Agency (Ahpra) in 2015. Both concluded that ROP has been a poorly researched area with little known about the potential effect on the competence of practitioners who are re-entering the workforce, the best way to maintain competence during career breaks and the optimal way to assess the competence of returning practitioners. Furthermore, neither was able to find any clear consensus about the optimal time period after which assessment of competence should be introduced [4,5,6]. As the ROP registration standards for most National Boards are currently under review, it is timely to review the current evidence.


The aim of the systematic review is to develop a contemporary evidence base to support the development of effective ROP registration standards.

Research questions

  1. Does the period of time for skills retention and/or skills fade vary between different health professions and/or at different stages of their career (e.g., new graduate, early career, mature or advanced practitioners)?

  2. Is there evidence regarding when competency assessment should be completed?

  3. Is there any evidence for the minimum number of hours of practice needed over a set period of time to maintain competency? Does this vary across professions or scope of practice?


Methods

A systematic review was conducted examining the above research questions based on selection criteria, methods and analysis that are summarised below.

The development of the research questions and search terms was informed by the two unpublished reviews discussed above.

The full protocol for the systematic review was recently published [7].

Searching and screening

The search terms and sources of literature selected for the review are based on Ahpra’s previous experience conducting a systematic review of the evidence for ROP standards, which covered journal articles and grey literature published between 1990 and 2014, as well as standard database preliminary testing.

Search terms

Search terms were selected for the health practitioner group, intervention, and outcome using a combination of the National Library of Medicine Medical Subject Headings (MeSH) and additional relevant search terms. Boolean operators were used to combine terms, and ‘wild cards’ were used to account for plurals and variations in spelling. MeSH is a standardised hierarchically organised vocabulary developed by the National Library of Medicine to index, catalogue and search biomedical and health related information. The MeSH terms for this review are presented in an Additional file 1: Appendix A and can also be found in the published protocol [7].

Sources of literature

The main sources of literature were:

  • Research databases, including the Medical Literature Analysis and Retrieval System Online (MEDLINE), Excerpta Medica Database (EMBASE) and PsycINFO (via the Ovid platform), and the Cumulative Index to Nursing and Allied Health Literature (CINAHL)

  • Search engines comprising Google Scholar and Google Advanced

  • Grey literature produced by other regulatory organisations, governmental bodies and professional associations.

  • Reference lists of papers and reports selected for review.

Inclusion and exclusion criteria

Articles and reports were included in the systematic review if they met the following criteria:

  1. the focus of the report/article was ROP for those health professions regulated in Australia

  2. reviews, original research, reports and theses

  3. published from 1 January 2015 to mid-April 2022

  4. written in the English language.

Articles and reports were excluded from the systematic review if they met the following criteria:

  1. focussed on health and other professionals not regulated under the National Law

  2. focussed on students or interns

  3. focussed on regulatory standards other than ROP

  4. opinion pieces, newsletters, conference presentations

  5. published before 1 January 2015

  6. not written in the English language.

Data extraction

Titles identified from the search were checked and the abstract was reviewed where the title appeared to be relevant to the research questions. Where the abstract met the inclusion criteria the full article was downloaded and assessed against the inclusion/exclusion criteria.

A Microsoft Excel spreadsheet was used to record bibliographic information about each article or report (e.g., author, date, title), the study population (e.g., health profession, size, country), intervention (e.g., return to practice, maternity/paternity leave), main findings, study type, the Australian National Health and Medical Research Council (NHMRC) level of evidence [8], decisions as to inclusion/exclusion (including any reasons for exclusion) and the quality assessment.

Quality appraisal

Where the full text of the article was assessed as relevant to the research question(s), a quality appraisal was conducted independently by two people. The protocol was modified to use the Critical Appraisal Skills Programme (CASP) checklists for systematic reviews [9], randomised controlled trials and cohort studies for quality appraisal of our yield. Items on the CASP checklist for systematic reviews were left out where they were not deemed relevant for a narrative review. The Joanna Briggs Institute (JBI) checklist for appraisal of analytical cross sectional studies was also used [10].


Results

Study selection

Our search strategy identified 657 records through database searching, with an additional 25 records identified through other sources, leaving 540 records after duplicates were removed. Title screening reduced these to 121 records, of which 56 were excluded after review of the abstract. The remaining 65 full-text articles were assessed for eligibility, and 53 were excluded, either because they did not meet the inclusion criteria (44) or because they were included in a review article selected for the qualitative synthesis (9) (Fig. 1).

Fig. 1

Flow chart of studies selected for inclusion in the systematic review

Description of included studies

Twelve studies were included in the review, comprising five literature reviews (two systematic reviews and three narrative reviews), a randomised controlled trial and five cohort studies, which were assessed for quality using the relevant CASP checklist. There was also a cross-sectional study that was assessed using the relevant JBI checklist. The characteristics and quality assessment of the included studies are outlined in Table 1.

Table 1 Characteristics and quality assessment of the included studies

The two systematic reviews were assessed as of moderate quality; however, the focus of those reviews has limited applicability to the research questions [11, 12]. The three narrative reviews included in our study were rated as of moderate quality [13,14,15]. The findings of these reviews should be treated with caution as the number of participants in each of the included studies was low and follow-up times were short.

The randomised controlled trial was assessed as low quality because the number of participants in each arm (n = 12) was too small to show an effect size and participants in the intervention group had significantly more previous experience than those in the control arm [16].

Four cohort studies were assessed as of moderate quality [17,18,19,20] and the fifth was of low quality [21]. The main limitations of the first moderate quality study were that 25% of study participants were lost to follow-up and the analysis did not consider the length of previous experience in the area of training [17]. The main limitation of the second study was that it was a small study that used a pre-test, post-test design to assess skill retention in simulated surgical activities [18]. The third study followed up almost all participants to assess the retention of their knowledge, but retention of technical skills was assessed in only 69% of participants, the follow-up time was relatively short (3 and 6 months) and potential sources of bias were not discussed [19]. The fourth study did not include an analysis of the findings by previous experience with the point-of-care ultrasound techniques that were the subject of the study [20]. Limitations of the low quality cohort study included insufficient consideration of potential sources of bias or confounding and high loss to follow-up [21].

The cross-sectional study was assessed as of moderate quality, its main limitation being a reliance on self-reported issues rather than objective structured testing of performance after a break of at least eight months due to the COVID-19 pandemic [22].

Research question 1

Does the period of time for skills retention and/or skills fade vary between different health professions and/or at different stages of their career (e.g., new graduate, early career, mature or advanced practitioners)?

Variation in skills retention and/or skills fade between different health professions

The review identified only one study that compared skill retention or skills fade between different health professions [19]. It found that, following completion of a course on neonatal life-support, medical practitioners (paediatricians, anaesthetists and residents) (N = 74) had significantly higher retention of theoretical knowledge and technical skill compared to other health practitioners (midwives and other unspecified health practitioners) working in a neonatal setting (N = 40), at baseline, 3 and 6 months.

We found that almost all the published literature about skills fade focussed on medical practitioners, nurses or paramedics. The literature generally concentrated on specific skills associated with a higher risk to public safety, such as surgical or resuscitation procedures, for which the participants were generally health practitioners in active practice. While some of the studies examining skills fade in a pre-hospital setting comprised a mixture of health professions, none of the other authors stratified their findings by profession.

The review identified two narrative reviews, a randomised controlled trial and five cohort studies examining skills decay in active practitioners following training; these were of low to moderate quality [13,14,15,16,17,18,19,20,21]. In addition, a highly relevant systematic review of skills fade conducted by the General Medical Council (GMC) in 2014 is cited by a systematic literature review identified in this study [11].

The GMC’s systematic review of skills fade concluded that clinical skills decline between six and 18 months, with a steeper decline at the outset and a more gradual decline as time passes [4].

In reaching its conclusions, the GMC review noted that older empirical research indicates that:

  • speed/time to complete a task and accuracy are commonly used as the main outcome variables to assess skill decay

  • the extent of skills fade is largely determined by the degree of over-training and the complexity of the task [23].

Skills decay in active practitioners

The review found that most studies of skills decay are based on studies of novice or recent graduate health practitioners. Although the subject of this review is ROP of practitioners returning to the workforce, studies examining skill decay in active practitioners can provide useful information, particularly where the practitioner did not use the skill of interest during the lapsed time. Skills retention in active health practitioners varies with the task. For instance, emergency airway management and defibrillation skills decrease between four and six months after training [4, 12, 15, 19], whereas laparoscopic surgical skills decrease six to eight months after training, and catheter insertion skills for haemodialysis do not decrease until after one year [4]. Longer breaks were generally associated with greater skill decay [14].

Evidence for the extent of skill decay was inconsistent, with one review concluding that practitioners assessed for retention of learned skills did not lose the skill completely when tested between four months and two years after training [4] while another reported a complete loss of skills in orthopaedic residents who had trained in simulation-based arthroscopic shoulder skills and not used the skills after six months [13]. Another reported an 80% retention rate of airway skills learned by anaesthetists who had undergone simulation emergency airway training [17].

Regular repeated assessments of basic surgical skills during surgical residents’ research years reduced errors associated with rule-based procedures, but did not reduce errors associated with the strategic approach to the surgical intervention, compared to residents returning to surgery who had not undergone repeated assessment [18]. Residents and faculty perceive that the extent of skills decay is related to the level of skill difficulty, with the greatest loss perceived in technical skills, followed by a decrease in knowledge of procedural steps [4, 14].

Practice between assessments was reported to increase confidence and proficiency, with experts appearing to retain skills better than novices [4, 13,14,15]. For example, orthopaedic residents who had attended a 30-day intensive course that included basic fracture fixation techniques, application of casts and splints, and familiarisation with basic surgical instruments had significantly greater skill retention than non-participants after six months [13]. Similarly, residents who were exposed to surgical skills through on-call work during a research break from a clinical role of one to three years reported higher confidence to undertake more difficult surgical procedures at the completion of the research absence compared to surgical residents whose on-call experiences were limited to bedside care [14]. Retention of paediatric resuscitation skills eight months after training was improved by additional simulation-based training at four months [16].

The role of practice between assessments is also reflected in a study that showed a significant increase in the use of point-of-care ultrasound by active medical practitioners following completion of a continuing medical education course, which was associated with increased skills retention at eight months [20]. Knowledge test scores increased from a median of 60% to 90% immediately post-course, decreasing to 87% eight months after the course. Median skills test scores for four common applications (heart, lung, abdomen and vascular access) increased from 36 to 72 points immediately post-course, with a two-to-seven-point decrease after eight months. Pre-course knowledge and skills test scores were significantly lower for non-users than for moderate-to-high users; however, this discrepancy diminished immediately post-course and remained reduced after eight months.

This review also identified a poorly designed study of skill retention in general practitioners following a one-hour training session in melanoma diagnosis and treatment. Although there was a significant increase in knowledge immediately after training, the 30% of participants followed up at the end of a year had significantly lower scores for appropriate management of cases compared to immediately after training [21].

Skills decay after time away from practice

The single study that specifically examined skill decay after time away from practice used the subjective experiences of United Kingdom ophthalmologists returning to cataract surgery after an eight-month nationwide pause on elective surgery due to the COVID-19 pandemic [22]. This study found that two-thirds of respondents were unaware of any return to practice guidelines and only one in nine respondents had a formal plan made before returning to cataract surgery. Operating difficulties were frequently reported after returning to cataract surgery (29.1%), particularly by less experienced ophthalmic surgeons.

Variation in skills retention and/or skills fade between different stages of a health practitioner’s career

The strongest empirical evidence for variation in skills retention and/or skills fade between different stages of a health practitioner’s career comes from a study of medical practitioners returning to practice in the United States [2] which was published prior to the study period for this review. The study found that older age and longer time out of practice are significantly related to performance at assessment for a return to practice program. Assessment of skills at the time of re-entry showed that only a quarter of participants (15, 24%) were competent to return to practice with no or minimal education, while more than two-thirds (43, 69.4%) required remediation through a structured educational process and a small proportion (4, 6.5%) were assessed as requiring training in a residency program. Linear regression demonstrated that years out of practice and increasing physician age predicted poorer performance.

The only comparison of skill retention in novices and experts identified in the review found that experienced surgeons demonstrated a high degree of skills retention 18 months after training in laparoscopic procedures, whereas novices provided with the same training returned to the pre-training level between six and 18 months afterwards [4].

Research question 2

Is there evidence regarding when competency assessment should be completed?

There is no clear consensus on the period of elapsed time after which an assessment of competency may be needed. The findings indicate that the need for a competency assessment after an absence from practice depends on the skills and circumstances of the individual health practitioner [4, 14, 15, 24].

The GMC review of skills fade found no consensus on the period of elapsed time after which an assessment of competence should be introduced concluding that, based on the evidence it had collected, when a competency assessment should be completed depends on the skill and the circumstances of initial acquisition and interim practice [4]. Gawad et al. 2019 reached a similar conclusion, noting that surgical residents who were returning to clinical training after an extended period of research training were treated as if their clinical training had not been interrupted [14].

The systematic review of skills fade in emergency airway management found that none of the 10 studies included in the review (which covered hospital, military deployed and domestic settings) was able to recommend an ideal interval for refresher training [15]. However, a Delphi study of military nurses (included in that review) found strong agreement among experts that a return to practice program should be required following a period of 18 months out of clinical practice, with very strong agreement for nurses returning after two years [24], suggesting a need for competence assessment around this time.

Research question 3

Is there any evidence for the minimum number of hours of practice needed over a set period of time to maintain competency? Does this vary across professions or scope of practice?

The review identified only one reference to objective evidence for a minimum number of hours of practice to maintain competency: the Texas Board of Nursing’s adoption of a four-year threshold for nurses returning to practice or transitioning to a new practice setting. This threshold is based on joint research conducted by the Texas Board of Nursing and Lamar University in 1994, which showed an increased risk of errors leading to disciplinary action in nurses returning to practice after more than four years [25]. Unfortunately, this research is unpublished.

Discussion and conclusions

Australian and international health practitioner regulators have specific requirements to ensure their registrants practise safely and professionally. In Australia, these requirements include a minimum duration of practice, maximum time away from practice, and maximum time between completing a qualification and starting practice. As noted above, health practitioners can become clinically inactive for a range of reasons, including career breaks, low volumes of work, or a change in scope of practice [2, 3]. Common concerns reported by health practitioners returning to work include anxieties about loss of clinical skills and knowledge, low self-confidence, work–life balance, and fears about how they will be perceived by colleagues [3, 26,27,28].

A systematic review conducted by Ahpra in 2015 (unpublished) of the evidence for ROP which focussed on Aboriginal and Torres Strait Islander health practitioners, Chinese medicine practitioners, medical radiation practitioners and occupational therapists found almost all studies were of medical practitioners, nurses or midwives. That review concluded that there was very little evidence about the amount of recent practice required to maintain competence, although the findings of a small study suggest that shorter breaks from practice (one to five years) have less impact on competence than longer breaks (more than five years) [2].

This review, which covers all health professions regulated under the National Scheme, found a small body of research on ROP published in the last five years. As with the previous review, almost all studies were of medical practitioners, nurses and midwives. Aside from three large literature reviews, these studies generally focus on specific areas of practice that require a high level of clinical skill and accuracy; in short, higher-risk areas of practice such as surgery, pre-hospital emergency medicine or military deployment.

Skills retention or fade

This review found consistent evidence that skills for high-risk procedures decline between six and 18 months after training, and resuscitation skills between four and 12 months. While the evidence is limited, its implications may be particularly important for health practitioners who have a low volume of cases requiring more complex skills, such as those who work on a casual basis and/or less than full time.

Our findings support those of the GMC 2014 review of skills fade which found that clinical skills decline over periods ranging from six to 18 months out of practice, with a steeper decline at the outset slowing to a more gradual decline as time passes, and for resuscitation skills, the decline appears to occur between four and 12 months after training [4].

Are these findings applicable to health professionals who carry out lower risk procedures? Our review found little research on ROP for other health professions. Future research may be warranted to develop an understanding of the risks of those returning to practice following a period of absence. Only one of the studies identified through this review directly compared skills retention or fade across professional groups [19]. This study showed that medical practitioners had better skills retention than other health professionals (midwives, nurses and paramedics) following training in paediatric resuscitation skills. While the studies of skills fade in surgical residents and the military medical corps focussed on individual professional groups, some of the older studies of skills fade following resuscitation training cited by the GMC’s review included mixed professions, generally medical practitioners and nurses, also including midwives and paramedics [4]. The authors found no difference in skills fade between health professions following training, the critical factors appearing to be the complexity of the task and repetition [29,30,31].

This review identified only one study that compared experts with novices [32]. It found higher skill retention for laparoscopic procedures in experts than in novices. Other studies found an association between greater skills fade and older age, longer time out of practice, and a lower volume of relevant cases [2, 33, 34]. Analyses of notifications to Ahpra have consistently found that older age is associated with an increased risk of notification. The inter-relationship between older age and poorer skills retention is likely to be complex: skills in procedures that involve fine motor control were shown to decline relatively quickly [35], and poorer cognitive skills are also likely to play a part [36].

Competency assessment

How can regulators assess the competency of health practitioners who do not meet ROP standards? A literature review on best practice in the assessment of the competency of medical practitioners, published in 2018, found that regulators need to be clear about which construct they wish to assess [37], as the design will depend on whether they are interested in global judgements, specific behaviours, or the ability to demonstrate a professional response. While ethical challenges in medicine are universal, expected standards of performance may vary with the level of training and practice, requiring flexibility in the approach to assessment. Assessment of patient safety should centre on the candidate’s understanding of safety as a process, rather than technical competence.

Minimum number of hours over set time to maintain competency

The review was unable to find any evidence of the minimum number of hours required to ensure that competency is maintained other than a reference to an unpublished 1994 study of Texas nurses [25]. Research using regulatory data is needed to determine whether there is an association between time out of practice and risk of complaints, particularly those leading to disciplinary action and the optimal time for return to practice. As well as providing an objective foundation for the protection of the public, the findings could potentially also lead to less variation between jurisdictions.

Areas for further research

The review identified a number of areas where research could strengthen the evidence base for ROP standards. Suggested research includes benchmarking regulatory standards across jurisdictions and health professions, a case–control study to examine the risk of performance related complaints about health practitioners returning to work after varying times out of the workforce (e.g., one, three and five years), and extension of the systematic review to include self-regulated health professions and other regulated professions (e.g., teachers, lawyers).


Limitations

The main limitation of this review is the paucity of high-quality relevant studies. Almost all the research has been carried out on medical practitioners, nurses or midwives, and none of the publications compare ROP across professions. The bulk of the research centres on skills fade for high-risk procedures rather than ROP per se. Generally, these studies have low numbers of participants (except some of the studies of skills retention following resuscitation training), relatively short follow-up times, and some of the studies of skills fade in active practitioners do not make clear the extent of practice between training and assessment. Another limitation is the exclusion of self-regulated health professions and other regulated professions (e.g., teaching, law). The findings of this review should, therefore, be treated with caution.


Conclusions

ROP continues to be an under-researched area, with most studies focusing on medical practitioners, nurses and paramedics in active practice. Studies of skills retention following training by novices focus exclusively on complex procedures such as resuscitation skills or surgical techniques. Although the exact nature of the inter-relationship between them is unknown, factors that appear to influence skills retention include the length of time away from practice, level of previous professional experience and age. Other than for paramedics, surgeons and health practitioners in a military environment, complex high-risk procedures are rare in most practice situations, and ROP requirements should reflect this. It is not known whether the findings generalise to less-complex aspects of practice. Based on the available evidence, there is a case for ROP requirements that allow practitioners to undertake less-complex aspects of practice when they first return to work, before taking on more complex aspects of practice following on-the-job assessment. This approach would help regulators balance the need to ensure patient safety with minimising the impact on workforce supply.

The review was unable to find either a clear consensus on the period of elapsed time after which a competency assessment should be completed, or any objective evidence for the minimum number of hours of practice over a set period needed to maintain competency, although the specific skill and the circumstances of the individual health practitioner appear to be important factors. Further research based on regulatory data is needed to ensure that regulatory requirements for ROP rest on the best available evidence.

Availability of data and materials

The data and material are listed in the reference list.


References

  1. Australian Health Practitioner Regulation Agency. Recency of practice. Melbourne: Australian Health Practitioner Regulation Agency; 2019.

  2. Grace ES, Korinek EJ, Weitzel LB, Wentz DK. Physicians reentering clinical practice: characteristics and clinical abilities. J Contin Educ Health Prof. 2010;31(1):49–55.

  3. MacCuish AH, McNulty M, Bryant C, Deaner A, Birns J. Simulation training for clinicians returning to practice. Br J Hosp Med. 2021;82(1):1–13.

  4. Oates JL. Skills fade: a review of the evidence that clinical and professional skills fade during time out of practice, and of how skills fade may be measured or remediated. London: General Medical Council; 2014.

  5. Academy of Medical Royal Colleges. Return to practice background document: evidence on return to practice. London: Academy of Medical Royal Colleges; 2012.

  6. Federation of State Medical Boards. Report of the Special Committee on Reentry to Practice. Washington: Federation of State Medical Boards; 2012.

  7. Main PAE, Anderson S. Evidence for continuing professional development and recency of practice standards for regulated health professionals in Australia: protocol for a systematic review. JMIR Res Protoc. 2022;11:4.

  8. Hillier S, Grimmer-Somers K, Merlin T, Middleton P, Salisbury J, Tooher R, et al. FORM: an Australian method for formulating and grading recommendations in evidence-based clinical guidelines. BMC Med Res Methodol. 2011;11:8.

  9. Critical Appraisal Skills Programme. CASP systematic review checklist. Oxford: Critical Appraisal Skills Programme; 2018.

  10. Joanna Briggs Institute. Checklist for analytical cross sectional studies. Adelaide: Joanna Briggs Institute; 2017.

  11. Campbell P, Duncan-Millar J, Torrens C, Pollock A, Maxwell M. Health and social care professionals return to practice: a systematic review. London: United Kingdom; 2019.

  12. Thim S, Henriksen TB, Laursen H, Schram AL, Paltved C, Lindhard MS. Simulation-based emergency team training in pediatrics: a systematic review. Pediatrics. 2022;149(4):e202105430.

  13. Atesok K, Satava RM, van Heest A, Hogan MV, Pedowitz RA, Fu FH, et al. Retention of skills after simulation-based training in orthopaedic surgery. J Am Acad Orthop Surg. 2016;24(8):505–14.

  14. Gawad N, Allen M, Fowler A. Decay of competence with extended research absences during residency training: a scoping review. Cureus. 2019;11(10):15.

  15. Maddocks W. Skill fade in military medical training: a literature review of supraglottic airway use in the prehospital environment. J Mil Veterans Health. 2020;28(3):34–41.

  16. Jani P, Blood AD, Park YS, Xing K, Mitchell D. Simulation-based curricula for enhanced retention of pediatric resuscitation skills: a randomized controlled study. Pediatr Emerg Care. 2019;45:8.

  17. Clark CA, Mester RA, Redding AT, Wilson DA, Zeiler LL, Jones WR, et al. Emergency subglottic airway training and assessment of skills retention of attending anesthesiologists with simulation mastery-based learning. Anesth Analg. 2022;10:1213.

  18. Nathwani JN, Garren ME, Mohamadipanah H, van Beek N, DiMarco SM, et al. Residents' surgical performance during the laboratory years: an analysis of rule-based errors. J Surg Res. 2017;219:226–31.

  19. Paliatsiou S, Xanthos T, Wyllie J, Volaki P, Sokou R, Bikouli D, et al. Theoretical knowledge and skill retention 3 and 6 months after a European Newborn Life Support provider course. Am J Emerg Med. 2021;49:83–8.

  20. Schott CK, LoPresti CM, Boyd JS, Core M, Mader MJ, et al. Retention of point-of-care ultrasound skills among practicing physicians: findings of the VA National POCUS Training Program. Am J Med. 2021;134(3):391–9.

  21. Harkemanne E, Duyver C, Leconte S, Sawadogo K, Baeck M, Tromme I. Short- and long-term evaluation of general practitioners' competencies after a training in melanoma diagnosis: refresher training sessions may be needed. J Cancer Educ. 2021;78:77.

  22. Maubon L, Nderitu P, O'Brart DPS. Returning to cataract surgery after a hiatus: a UK survey report. Eye (Lond). 2021; online ahead of print.

  23. Arthur W Jr, Bennett W Jr, Stanush PL, McNelly TL. Factors that influence skill decay and retention: a quantitative review and analysis. Hum Perform. 1998;11(1):57–101.

  24. Kenward G, Marshall S, Irvine K. How much is enough? Using Delphi to explore the clinical-contact-time and return-to-practice needs of military nurses. Nurs Manag. 2017;24(2):20–4.

  25. Texas Board of Nursing. Guidelines for transitioning of the experienced nurse back into clinical practice or into a new practice setting.

  26. Academy of Medical Royal Colleges. Maternity/Paternity Survey Results. London: Academy of Medical Royal Colleges; 2016.

  27. Williams S, Bowbrick VA, Chan S. Return to work for higher surgical trainees: a deanery perspective. Bull R Coll Surg Engl. 2020;102(6):264–9.

  28. Nash E, Curry JI, Blackburn SC. Returning to the theatre after an interval. Bull R Coll Surg Engl. 2018;100(6):277–81.

  29. Pemberton J, Rambaran M, Cameron BH. Evaluating the long-term impact of the Trauma Team Training course in Guyana: an explanatory mixed-methods approach. Am J Surg. 2013;205(2):124–99.

  30. Yang CW, Yen ZS, McGowan JE, Chen HC, Chiang WC, Mancini ME, et al. A systematic review of retention of adult advanced life support knowledge and skills in health care providers. Resuscitation. 2012;83(9):1055–60.

  31. Smith KK, Gilcreast D, Pierce K. Evaluation of staff's retention of ACLS and BLS skills. Resuscitation. 2008;78(1):59–65.

  32. Maagaard M, Sorensen JL, Oestergaard J, Dalsgaard T, Grantcharov TP, Ottesen BS, et al. Retention of laparoscopic procedural skills acquired on a virtual-reality surgical trainer. Surg Endosc. 2011;25(3):722–7.

  33. Ali J, Howard M, Williams J. Is attrition of advanced trauma life support acquired skills affected by trauma patient volume? Am J Surg. 2002;183(2):142–5.

  34. Riegel B, Birnbaum A, Aufderheide TP, Thode HC, Henry MC, Ottingham L, et al. Predictors of cardiopulmonary resuscitation and automated external defibrillator skill retention. Am Heart J. 2005;150(5):927–32.

  35. Sinha P, Hogle NJ, Fowler DL. Do laparoscopic skills of trainees deteriorate over time? Surg Endosc. 2008;22(9):2018–25.

  36. Skowronski GA, Peisah C. The greying intensivist: ageing and medical practice — everyone's problem. Med J Aust. 2012;196(8):505–7.

  37. Burford B, Rothwell C, Vance G, Beyer F, Tanner L. Best practice in the assessment of competence: a literature review. Newcastle: Newcastle University, United Kingdom; 2018.



Acknowledgements

Not applicable.

Author information

Authors and Affiliations



Contributions

PM and SA designed the systematic review. The literature search was conducted by PM, and a blinded assessment of the quality of the included articles was conducted by PM and SA. PM drafted the manuscript, which was reviewed by SA. Both authors read and approved the final manuscript.

Corresponding author

Correspondence to Sarah Anderson.

Ethics declarations

Ethics approval and consent to participate

Ethics approval was not required as this is a systematic review.

Consent for publication

Both authors give their consent to publication.

Competing interests

The authors have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1:

Appendix A.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Main, P.A.E., Anderson, S. Evidence for recency of practice standards for regulated health practitioners in Australia: a systematic review. Hum Resour Health 21, 14 (2023).
