Asst Professor not impressed with bridging programs

You raise some vital questions that need very clear answers from the APS executive. I’ve been asked to offer a reply. All I can do is share my own thoughts and opinions on what you raise.

In terms of your question of whether the talk of a bridging program is ‘true’, my personal understanding is that it is only now being raised by the APS executive director as something to advocate for, as a direct response to increased pressure from members over their concerns about Medicare’s two-tier rebate system.

From my perspective, a bridging program is only now being put squarely on the table as an option to try and quiet the disquiet. Unfortunately, though, this option only reinforces the divisive presumption that psychologists working in clinical practice without a master’s degree in clinical psychology are less capable and must therefore ‘upskill’ through a bridging course to reach the standard of their clinical-master’s-trained colleagues.

Rightly so, you also raise the very real issue of expensive hoop-jumping for the sake of proving one’s worth in a system that does not readily accept the quality of your expertise as it stands. This ends up feeling like a lot of money to spend on an invalidation process.

From my perspective, if there were any evidence that a clinical-master’s-trained psychologist produced better outcomes for their clients, then a bridging program would make sense and would be of great benefit to anyone who wanted that magic upgrade toward a better quality of service.

But the reality is, psychologists on the lower tier produce outcomes equivalent to those of their clinical-master’s-trained colleagues across all levels of caseload severity. Or to put it more colloquially, psychologists on the lower tier are just as good at treating clinical cases as their colleagues on the higher tier. To suggest otherwise is unfounded and unethical.

As a science we need to advocate for structures that are based on the facts. And the fact remains that a psychologist can develop clinical practice expertise through a number of different pathways. Psychologists can and do become experts in clinical practice springboarding from the 4+2 pathway, the 5+1 pathway, the clinical master’s pathway, or other specialist psychology master’s pathways.

To truly and honestly embrace and uphold our diversity as a profession we need to acknowledge and endorse not only the variation in specialist expertise we have but also the various pathways through which we attain that specialist expertise.

Ultimately we need to recognise and endorse all forms of prior learning achieved through professional development and related practice experience that develop and enhance the practitioner’s very real and legitimate areas of specialist practice expertise. When the APS does this, it is unashamedly advocating on behalf of all of its members, for the benefit of its members and the community they serve.

On the other side of the coin, when a professional body cannot overtly and publicly endorse its members’ practice expertise when such expertise is clear for all to see through the outcomes of their practice, then that professional body has stopped advocating for its members and is, for some reason, choosing instead to invalidate its members’ expertise, to the detriment of those members and the community they serve.

Dr Clive M Jones Dipt, DipCouns, DipLC, BEd, MEd, GradDipPsych, PhD (psych), MAPS, MCSEP, MCCOUNSP

Asst Professor – Bond University Faculty of Health Sciences & Medicine

Email: clive@clivejones.org

Website: clivejones.org

39 thoughts on “Asst Professor not impressed with bridging programs”

  1. What seems to be most confusing is what the real issues are. Is it delivery-of-service standards? Is it recognition as a professional? Is it money based on rebates? Or what? Reading all the comments seems to blur all these lines with no discernible way forward. I go back to my other comments that the reality is the system was flawed right from the start. If it’s a delivery-of-service (hence money) issue then the item numbers should be based on the service delivered – CBT, IPT etc – and therefore the ‘title’ of whatever you are calling yourself becomes irrelevant. But are we also not missing the issue where the delivery of service is actually dictated by the medical profession, which has probably the least training of any psychologist? That is still the bigger issue that continues to drive the divide in our profession.

    Because they have to refer for services informed by the World Health Organisation, 1996, Diagnostic and Management Guidelines for Mental Disorders in Primary Care: ICD-10 Chapter V Primary Care Version, there is the automatic impression that the ‘clinical psychologist’ is more apt to treat these conditions (depression or anxiety) because of the extensive training the ‘clinical psychologist’ has to do with the DSM and/or ICD manuals. Of course this knowledge has little to do with treatment outcomes but, be that as it may, if it is indeed the driving factor in determining the key differences in ‘fee for service rebates’, then why not, as a ‘psychological profession’, mandate that every psychology program in Australia must have extensive training in the DSM/ICD manuals in order to be granted degree-granting authority.

    Maybe it’s time to start asking different questions rather than moaning about current differences that exist not only within the APS but also outside it, across the Department of Health, the AMA, the RACGP, the RANZCP etc.

    Maybe it’s just easier to beat each other up than stand together to address the larger issues.

    1. Yes, there are several real issues here. There is the issue of the Medicare rebate disparity. There is the issue of discrimination for employed roles and contracts. There is the issue of limiting the services that many psychologists are able to offer under Medicare. There is the issue of reports not being accepted by Centrelink and other entities. The aforementioned issues are all entwined, and seem to have their roots in the decision to rebate psychologists differently based on whether they have endorsement in clinical psychology. There is also the issue of GPs dictating services. I would hope that the APS is advocating on our behalf on ALL of these issues. Until recently I was confident that they had been, but seeing the FOI documents has caused me to believe otherwise. I do have some hope that this is changing, but I will be watching much more carefully from now on.

  2. Completely agree. I was teaching clients about Martin Seligman’s Positive Psychology today and some of his VIA character strengths jumped out at me in relation to what is happening in psychology in Australia.

    Maybe the APS Board should familiarise themselves with these.

    WISDOM AND KNOWLEDGE – Cognitive strengths that involve the acquisition and use of knowledge.

    Judgement/Critical Thinking/Open-Mindedness
    Thinking things through and examining them from all sides; not jumping to conclusions; being able to change one’s mind in light of new evidence; weighing all evidence fairly.

    COURAGE – emotional strengths that involve the exercise of willpower to achieve goals in the face of opposition.

    Integrity/Genuineness/Honesty
    Speaking the truth, but more broadly presenting oneself in a genuine way and acting sincerely; being without pretence; taking responsibility for one’s feelings and actions.

    JUSTICE – public strengths that underlie healthy community life.

    Citizenship/Duty/Teamwork/Loyalty
    Working well as a member of a team; being loyal to the group; doing one’s share.
    Fairness and Equity
    Treating all people the same according to notions of fairness and justice; not letting personal decisions bias decisions about others; giving everyone a fair chance.
    Leadership
    Encouraging a group of which one is a member to get things done while maintaining good relations within the group; organising group activities and seeing that they happen.

    Maybe we should follow what we teach!

  3. References please. Would love to see the paper you are citing that is evidence of equivalence in outcomes. Thank you

      1. Oh yes, this. If you read the original research for yourself you will see that the authors themselves acknowledge that the results should not be used for comparative purposes. The senate enquiry then also ‘believes the conclusions drawn are readily disputed based on the very poor methodology of the evaluation and therefore of limited value as a basis for decision making going forward’.

        The major methodological flaw is that psychologists involved were able to choose which data was included (10 clients each) and the unrealistically high effect sizes (higher than even under the strictest RCT conditions) show that they have chosen their best client outcomes. If ANY psychologist is able to choose their 10 best cases we will all look like we are doing excellent work and any true differences are neglected.

        This shows the importance of reading and understanding original peer-reviewed research for yourself, rather than assuming that a summary by someone with their own agenda, highlighting only the aspects that support their argument, is a true reflection of fact.

        Unless new data suddenly comes to light (which would be amazing) the FACT remains that there is no data to prove that clinicals get better outcomes but there is also no data to prove that there is equivalence between generals and clinicals.

        1. “If ANY psychologist is able to choose their 10 best cases we will all look like we are doing excellent work and any true differences are neglected.”

          I have seen reviews, post hoc analyses, etc., but I haven’t read the original paper you refer to, so that is an obvious caveat. [Actually, could someone please provide a link to the original?] However, I have some difficulty following your reasoning that the above is a methodological flaw. The assumptions being alluded to are that clinical psychs get better outcomes than other psychs and that this provides a rationale for the two-tier system. So where would you expect those differences to be? You say not in the 10 best cases; would they be in the 10 next best, the 10 worst, or perhaps somewhere else in the range?

        2. Thank you for that clarification. I know similar points were raised when the author of the paper posted it in a Facebook group for high school and undergraduate students of psychology. One high school student asked how the author arrived at his conclusion when his source research explicitly stated that “The study design meant that it was not appropriate to aggregate data across these three groups or to explore whether statistically significant differences existed between the groups”, but the question went unanswered, so I appreciate your clarification of the point.

        3. Hi Curious, Thank you for sharing your thoughts.

          Unfortunately, I cannot escape a long answer because there is so much to cover even just in offering a summary. I do hope you and others will be up for the read.

          If you haven’t already, I encourage you and everyone who is interested to read the original publication that is being debated, and then read through all the published arguments debating the pros and cons of the original article in question. They can all be found in the later 2011 issues of the Australian & New Zealand Journal of Psychiatry.

          I think our APS ProQuest access only offers a summary of the original article and doesn’t seem to have available the additional published debate around it. So, you will need to either purchase them online or access them through a Uni library. It would not be right for me to put the articles up online in this forum due to copyright.

          I’m happy to offer a very brief summary of each of the publications here now, and also my own thoughts on their thoughts, for anyone interested. Also, I’d like to offer further clarity on the rationale behind the additional post-hoc analysis of the original research, by Prof Mark Anderson in Dec 2016, that by all accounts seems to have made some folk a tad angry.

          Firstly though, the original article causing all the debate is referenced below:

          Pirkis, J., Harris, M., Ftanou, M., & Williamson, M. (2011). Australia’s Better Access initiative: an evaluation. Australian and New Zealand Journal of Psychiatry, 45, 726-739.

          Secondly, references to the three published critiques talking through pros and cons of the original article are referenced below:

          Allen, N. B., & Jackson, H. J. (2011). What kind of evidence do we need for evidence-based mental health policy? The case of the Better Access Initiative. Australian and New Zealand Journal of Psychiatry, 45, 696-699.
          Jorm, A. F. (2011). Australia’s Better Access initiative: do the evaluation data support the critics? Australian and New Zealand Journal of Psychiatry, 45, 700-704.
          Hickie, I. B., Rosenberg, S. & Davenport, T. A. (2011). Australia’s Better Access initiative: still awaiting serious evaluation? Australian and New Zealand Journal of Psychiatry, 45, 814-823.

          Then finally, the ‘right of reply’ on the three critiques above was given to the authors of the original article and is referenced below:

          Pirkis, J., Harris, M., Ftanou, M., & Williamson, M. (2011). Not letting the ideal be the enemy of the good: the case of the Better Access evaluation. Australian and New Zealand Journal of Psychiatry, 45, 911-914.

          Quick summary overview of the original research:
          The authors of the original article that is being heavily criticised by some, Pirkis et al. (2011), conclude their research on the outcomes of psychologists and GPs treating under Medicare by saying: “The findings suggest that Better Access is playing an important part in meeting the community’s previously unmet need for mental health care. The initiative has enabled patients with clinically diagnosable disorders and considerable psychological distress to access care; many of these patients have not received mental health care in the past. These patients’ mental health status improves markedly during the course of their care; their symptoms reduce and their psychological distress diminishes. These achievements should not be under-estimated.” (p. 738).

          Pre- and post-treatment scores of each group (clinical, general and GP) were run through t-tests as part of the original research and the changes were shown to be statistically significant. So on this alone we can all say ‘yay’ to us psychologists and ‘yay’ to the GPs, because the before-and-after treatment changes were found to be statistically significant and so do occur for psychologists and GPs treating under Medicare.
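
          For anyone who finds the within-group idea abstract, here is a minimal sketch of the kind of test being described, on made-up numbers (the K-10 figures below are purely hypothetical, not the evaluation’s data): a paired t-test asking whether one group’s clients improved from pre- to post-treatment.

          ```python
          # Minimal sketch of a within-group (paired) pre/post test.
          # All numbers are invented for illustration only.
          import numpy as np
          from scipy import stats

          rng = np.random.default_rng(42)
          pre = rng.normal(30, 6, size=100)        # hypothetical pre-treatment K-10 scores
          post = pre - rng.normal(8, 4, size=100)  # hypothetical scores after treatment

          t_stat, p_value = stats.ttest_rel(pre, post)  # paired t-test within one group
          print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
          ```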

          So now on to the critiques of the research.

          Summary of the three (3) critiques:

          Of the three critiques, Hickie et al (2011) was by far the most damning of Pirkis et al’s research.

          Hickie et al’s critique slams every aspect of the research. This is seen straight up in their opening statement: “There are few more heated issues in Australian mental health than determining the value of the Better Access to Psychiatrists, Psychologists and General Practitioners through the Medicare Benefits Schedule (Better Access) initiative. In our view, the need for genuine, well-designed and ongoing evaluation of its impacts has never been seriously addressed. Sadly, the publication of aspects of the extremely limited government-purchased ‘patient outcomes’ evaluation adds very little to our knowledge base.” (p. 814).

          Ultimately, Hickie et al do not think the research conducted does anything to confirm the impact of improved mental health for those being treated under Medicare by clinical psychologists, general psychologists or GPs.

          I need to point out that most of the arguments proposed directly to me against the original research basically come from the Hickie et al (2011) school of thought. I can elaborate on this in another post if needed. But in summary, a lot of it boils down to what many would see as the unfortunate ongoing clash between the science of medicine and the science of psychology, rather than sound confirmation from Hickie et al of the invalid and unreliable nature of the findings of the research they are criticising.

          I would suggest that the argument proposed by Hickie et al calls for a very strict and uncompromising adherence to a methodology and analysis most suited to medicine/psychiatry and the randomised controlled trial of the ‘pill’, while showing what I would call a contempt (possibly stemming from a lack of understanding) for other forms of valid and reliable methodology and analysis that are often better suited to psychology, particularly in field research of clinical practitioners like the project in question.

          The old chestnut argument that “it isn’t real research and they aren’t real results unless it’s a randomised controlled trial” is one climate scientists constantly have to debate with their critics.

          Anyway, the original authors, in their final right of reply, speak to this in addressing the methodological concerns raised by Hickie et al. So the authors’ right of reply is definitely worth a read. I make mention of this again later.

          The second critique, by Allen & Jackson (2011), was not a happy one either, stating: “Perhaps the most provocative aspect of the report, given the self-interest of the professional groups involved, is the separate analyses of the clinical psychologist, generalist psychologist, and general practitioner groups. Although Pirkis and colleagues state that it was not appropriate to either pool these results, nor to perform statistical comparisons between the groups, the results are nevertheless given some discussion in the paper, and will undoubtedly be like catnip to professional groups who are sometimes more interested in protecting their members…” They go on to note, though: “…in Prof Jorm’s companion editorial, he already attempts to draw some conclusions from these patterns of data (although we hasten to point out that he does not belong to one of the professions being evaluated and is therefore in that sense a relatively unbiased observer).”

          There were a range of concerns addressed in this critique regarding the whole Medicare system set up for psychological services. Among other things, a key point in the quote above is that Allen & Jackson (2011) thought the findings would only serve to draw out the feral cats of the psychology fraternity to fight over territory. In retrospect, one could probably say they were quite on the money with that point.

          So now to Jorm’s (2011) critique. Jorm was the “relatively unbiased observer” noted by Allen & Jackson (2011). Jorm goes about his critique by framing 10 key criticisms of the original research as questions and then addressing each one, mostly in favour of the original research.

          Jorm (2011) offers the following statement in relation to the comparison of pre- and post-treatment mean scores:

          “The evaluation by Pirkis and colleagues provides data on symptom scores pre- and post-treatment for clinical psychologists, general psychologists and GPs. From these data it is possible to calculate uncontrolled (pre- versus post-therapy) effect sizes. The standardized mean change score was 1.31 for clinical psychologists, 1.46 for general psychologists and 0.97 for GPs. The effect sizes for the two groups of psychologists are similar and are comparable to the mean uncontrolled effect size of 1.29 reported in a meta-analysis of psychological therapies in routine clinical settings. On the data available, it appears that general psychologists produce equivalent outcomes to clinical psychologists and perhaps better average outcomes than GPs” …. “At the time that Better Access was introduced, I largely shared the views of the critics… Having examined the data, I have largely changed my mind… although it needs some tweaking at the edges to reduce remaining inequalities.”

          In the context of Jorm’s comments above it’s important to clarify again that the differences in pre- and post-treatment scores of each group were run through t-tests in the original research, so the change found within each group pre- and post-treatment was statistically significant.

          Jorm highlights the importance of this by clarifying that we can look at the standardised mean change scores (effect sizes) of each group (clinical, general and GP) simply as they stand, in their near numerical equivalence across the three groups.

          So, knowing that the treatment change within each group did not come about by chance, we do not, in the context of basic descriptive data, need a test of statistical significance to know that one effect size of 1.31 (clinical) compared with another of 1.46 (general) is a demonstration of equivalence. To say it is not equivalent is like arguing that 1 is not the same number as 1. So, in this context the results do allow for a very apt use of the expression – the numbers speak for themselves.

          The discrepancy of 0.15 between the clinical and general psychologists’ effect sizes is only slight. So, should the original research have run a t-test to determine whether that difference of 0.15 between the psychology groups was statistically significant? Well, no. Firstly, because there really is no need – the scores speak for themselves and, in that sense, there would probably not be any statistically significant difference between the two psychology groups anyway, because the difference is negligible. Secondly, the original research was not designed for a between-group t-test analysis anyway.

          Now very importantly – just because a t-test was not run on the comparison of mean score difference between the clinical and general psychologist group does not make the post-treatment scores of each of those groups invalid. Or to put it another way, just because a t-test was not run and could not be run for between group analysis DOES NOT make the t-test results of the within group analysis invalid.

          The fact remains that the clinical group and the generalist group of psychologists both had statistically significant changes in scores pre- and post-treatment. The test was set up to determine this and it shows this.

          So now in comes the post hoc analysis completed by Prof Mark Anderson in Dec 2016.
          The post hoc analysis did not go against any caveats the original researchers flagged about between-group data tinkering. The post hoc analysis did not do any stats BETWEEN GROUPS; it was just examining the effect size of pre-post change WITHIN each group. This went along with the original design and set-up of the project as a within-group analysis.

          Effect size analysis (Cohen’s d) uses mean scores and standard deviations, which were published in the original article and so are easily accessed from the tables of the published articles. So nothing unethical or untoward there.
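
          To make that concrete, here is a minimal sketch of how a within-group pre-post Cohen’s d can be computed from published summary statistics alone – no raw data needed. The pooled-SD formula shown is one common convention, and the figures are invented for illustration, not the published values.

          ```python
          # Within-group pre-post effect size from summary statistics only.
          def cohens_d(mean_pre, mean_post, sd_pre, sd_post):
              """Standardised mean change: (pre - post) / pooled SD."""
              pooled_sd = ((sd_pre ** 2 + sd_post ** 2) / 2) ** 0.5
              return (mean_pre - mean_post) / pooled_sd

          # Hypothetical K-10 summary figures for one provider group:
          print(round(cohens_d(mean_pre=30.0, mean_post=21.0, sd_pre=7.0, sd_post=7.5), 2))  # ~1.24, a 'large' effect
          ```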

          The post hoc analysis of the within-group pre- and post-treatment change in mean scores confirmed that not only were the pre- and post-treatment changes within each group statistically significant, the actual size of the treatment effect was shown to be large for both clinical and general psychologists. But did the post hoc analysis show one group of psychologists having a larger treatment effect than the other? No – the effect size of treatment for the clinical psychology group was the same as that of the general psychology group.

          So what we can say as a result of the post hoc analysis is that the clinical psychology group and the general psychology group showed the same treatment effect size. That’s simply stating the facts of the stats.

          The final right of reply by the original authors

          Below is just a taste of what is addressed in the authors’ right of reply. I have not copied their replies, just quotes from them outlining what was raised as criticism. You will need to get the full article.

          The quotes I present below shed light on how much of the criticism raised speaks more to medical-science confusion over methodological designs outside the scope of the classic randomised controlled trial. Hence it also sheds a little light on why psychological science is always considered the poor cousin of medical science, and the psychologist the poor cousin of the psychiatrist.

          Another concern I have with the criticism Hickie et al make of the research in question is the presumption of unethical behaviour by the practising psychologists who chose to participate in it.

          The criticisms really became a ‘no holds barred’, ‘grab anything you can and throw it at them’ approach from the psychiatric fraternity. It’s disappointing to see some psychologists joining in.

          Quotes from the right of reply
          • “Both Allen and Jackson and Hickie et al express disquiet about the fact that participating providers entered data for patients. The implication is that providers may have intentionally altered the data that they were given by patients before entering it.”

          • “In addition, Allen and Jackson are suspicious that our providers may have given inaccurate responses of their own. For example, they suggest that providers may have indicated that they were delivering CBT when in fact they weren’t, and/or that they may have assigned incorrect diagnoses to patients.”

          • “Allen and Jackson also express concern about our assessment of outcomes, intimating that patients may have provided socially desirable responses on the K-10 and DASS-21 because they knew that the providers who had recruited them hoped they would improve”.

          The authors address each of these concerns and many more in their right of reply to all the critics. Here again is the reference to their right of reply if anyone wants to obtain the full copy. – Pirkis, J., Harris, M., Ftanou, M., & Williamson, M. (2011). Not letting the ideal be the enemy of the good: the case of the Better Access evaluation. Australian and New Zealand Journal of Psychiatry, 45, 911-914.

          On so many levels the criticisms thrown at this research by Hickie et al and others remind me of an encounter on a Q&A program where physicist Prof Brian Cox was confronted with a ‘no holds barred’, ‘grab anything you can and throw it at them’ argument from One Nation Senator Malcolm Roberts about the flaws of climate science. In that argument, I recall Senator Roberts raising the notion that climate science is not a true experimental approach because there are no randomised controlled trials, and so therefore it’s not real science. He then went on to discredit the climate scientist and the data.

          Here’s a link to a write-up and small video snippet of physicist Brian Cox’s encounter with the One Nation senator.
          http://www.news.com.au/entertainment/tv/particle-physicist-professor-brian-cox-mocks-one-nation-climate-change-denier-malcolm-roberts/news-story/b6e11a59cc39b6ea96ea89377da239a7

          Anyway. If anyone is keen, chase down and read through all the articles I’ve referenced above to make your own informed decision about the research and the data it provides.

          Kind Regards
          Clive Jones PhD MAPS

          1. Hi,

            Thanks for your detailed response and yes, I have read all of these papers and mostly agree with your summary of them. My stance is still that, unfortunately, this is the only data we have to support equivalence and it is very weak. In my opinion, as someone who knows a lot about the outcomes literature, the biggest flaw is still the self-selection of outcome data by the psychs involved. Had participants had to provide all their data this would have been a truer representation of real-world outcomes (and therefore would have sat closer to an effect size of .7 overall).

            Can I please be very clear that I am not arguing that generalists and clinicals are not equivalent. I have no idea and have not made any assumption as the data simply is not there. It would be great if this data was collected on a larger scale to answer this question once and for all in the best interests of the clients that we serve as a profession. My hunch is that there will be equivalence but at the moment we really can’t say that as a matter of fact.

          2. Hello Dr Jones,

            Thank you for your long, detailed post. I remember when you were generously posting similar information on the Facebook group for high school and undergraduate students of psychology for peer review purposes. As I recall, one of the high school students asked how Mark Anderson arrived at his conclusion when the original study by Pirkis et al. explicitly stated that “The study design meant that it was not appropriate to aggregate data across these three groups or to explore whether statistically significant differences existed between the groups”, making it statistically meaningless to compare effect sizes, but the question went unanswered, so I would appreciate it if you could clarify this point?

            1. Hi J Dwyer,

              Thank you for your question. I do need to clarify firstly, though, that the Facebook forum you mention is called Provisional Psychologists Forum Australia and, from my understanding, was set up to provide provisional psychologists in Australia a forum to connect and share their provisional-psych experiences for support etc.

              Most members of the forum, from my understanding, are our 4+2s, 5+1s and master’s students across Australia, with some undergrads and some fully registered psychologists too. I certainly wasn’t aware of ‘high school’ ring-ins 🙂

              The dialogue I had on the forum, which you mention, ended up tracking off onto a number of different threads, with a lot of different questions being asked of me on a lot of different topics and angles. So unfortunately this question must have slipped by me unnoticed. Please pass on my apologies to the high school student for missing that one.

              Now to the question – for a detailed answer, my post to Curious offers very specific and detailed clarity on it. So I encourage you to take a slow read-through to confirm all the points I raise on the matter.

              To offer a ‘spoiler’ for the longer answer contained in the reply to Curious: the post hoc analysis stayed very much within the parameters of the research by conducting a WITHIN-group analysis of the statistically significant pre- and post-treatment scores, and therefore did not by any stretch of the imagination “aggregate data across these three groups or … explore whether statistically significant differences existed between the groups”.

              So all the post hoc analysis did was to clarify the effect size of each group. It did not aggregate data between groups. It did not look at statistical significance between groups. It examined effect size within groups. I encourage you to read through my explanation to Curious as this is teased out more in my reply there.

              Kind Regards
              Clive Jones PhD MAPS

              1. Thank you, Clive, for your detailed, illuminating response. RAPS would like to pose this question to you:
                Has any research justified the position of Ian Hickie in supporting a two-tier system in which so-called specialists should be paid more than other psychologists? Or is this a bungled effort to reproduce the medical model of general practitioners and specialists in the field of psychology?

                1. Hi RAPS crew :),

                  In answer to your question, there is no evidence, to my knowledge, that shows any support for the two-tiered system of Medicare. Its original introduction and continued support is a clear example of a stats-101 furphy: an ongoing rejection of the null hypothesis of no difference between groups without the evidence to do so. Because there is no evidence of difference we need to continue to retain the null hypothesis of no difference until, if ever, it is proven otherwise.

                  Some speak strongly of a mystical, intuitive hunch of a difference. Well, that’s all great, but that is a prompt for research, not a prompt to implement a policy that structures mental health service provision in a way that ends up imploding our profession.

                  Then bring in the research published in 2011, which ends up offering a pretty strong hint that we must continue holding the null hypothesis of no difference in outcomes between groups, and the publication gets slammed by some high-profile psychiatrists. While this is simply a statement of fact, can I infer anything from it? Not really. But it has certainly got me intrigued and worried that we are a long way from having our own autonomy as a profession.

                  Is it the medical model trying to be used as a template to cookie-cut our profession…? Certainly a possibility. Either way, whatever the template is, we don’t fit it.

                  I have a question… was I naive pre-2006, or was our profession, before the Medicare two tiers, naturally and mutually more respectful of the different pathways taken to registration, private practice and the caseloads we were managing?

                2. Hi RAPS crew,

                  Adding to my initial reply to your question above re: Hickie – the Chair of the Australian Medical Association’s (AMA) Council of General Practice, Dr Brian Morton, expressed full support of the 2011 research on Better Access that Hickie has slammed.

                  Specifically, Dr Morton said the government’s independent review of Better Access found, “access has been substantially improved and continues to be improved; it is achieving positive outcomes, is cost effective and impacts on people with moderate to severe common mental disorders”. (Australian Doctor, July 1 2011). He is referring specifically to the 2011 research discussed in detail through the RAPS blog.

                  Examining public statements from Hickie, he has been highly critical of just about everything to do with Better Access to Mental Health, including the research completed on it, the professionals who treat and refer through it, and the impact of the whole administrative system on mental health service provision in this country.

                  Clearly frustrated by Hickie’s constant slamming, the AMA chair of general practice went on to say to Prof Hickie: “Rather than harshly criticising a program that has been running well, the AMA believes that you would be better focusing your attention on better addressing the remaining gaps in service delivery. A rising tide can lift all boats.” (Australian Doctor, July 1, 2011).

                  Standing in full agreement with the AMA on this!
                  Clive Jones PhD MAPS

              2. Hello Dr Jones, Thank you for your detailed response. The Facebook group you posted on is indeed comprised primarily of high school students and undergraduate students, as evidenced by the frequent number of questions about entering undergraduate university courses and the advertising of events in the group aimed at high school students. I hope that does not affect the quality of the peer review you were conducting on the article, which you stated was the purpose behind your posts in the Facebook group.

                I am tremendously appreciative of the long, detailed response you provided, but unfortunately it does not answer the question of how a comparison of effect sizes between different groups can be reliably conducted when the original authors conclusively stated that the “study design meant that it was not appropriate to aggregate data across these three groups or to explore whether statistically significant differences existed between the groups”. Perhaps the best way to resolve the issue is for the author to publish his findings in a peer-reviewed journal, rather than simply on his LinkedIn profile. I believe he and others have stated on multiple occasions that his findings were so conclusive no further research in the area was necessary; I am sure that if this were the case then there would be no difficulty in having such findings published and peer-reviewed by experienced psychologists and academics, who may have a different perspective than the high school and first-year university students these findings have been forwarded to previously.

                1. Hi J Dwyer,

                  Thank you for your question. And also thank you for encouraging me to chase up publication of the post-hoc info with Prof Anderson.

                  But to the question you raise, the post hoc analysis was all about ‘within group comparison’ as opposed to ‘between group comparison’ and ‘measurement of effect size’ as opposed to ‘statistical significance’.

                  So what this means, very clearly and specifically, is that the post hoc analysis did not “aggregate data across these three groups” and did not “explore whether statistically significant differences existed between the groups”.

                  An understanding of effect size analysis via Cohen’s d makes it clearer that no exploration of statistical significance took place. And an understanding of the difference between ‘within-group’ and ‘between-group’ analysis makes it easier to see that the post hoc did not aggregate data across the groups.

                  The post hoc confirmed the effect size within each group; it did not aggregate anything across the groups. Nor did it seek statistical significance of differences between the groups. It confirmed the effect size of each group.

                  It may help to take a step back to ‘shore up’ understanding of the difference between:
                  1. a) statistical significance and b) effect size
                  2. a) post-treatment comparison between groups and b) pre- vs post-treatment comparison within each group.

                  The research recommended not to do a) of 1 & 2 above. The post hoc did not do a) of 1 & 2 above. The post hoc did b) of 1 & 2 above, which fitted well with the original purpose and design of the research (see the sketch below).
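
                  Here is a minimal sketch of that distinction, on invented numbers only – hypothetical pre/post symptom scores for two provider groups:

                  ```python
                  import numpy as np

                  rng = np.random.default_rng(7)
                  pre_a, post_a = rng.normal(30, 6, 100), rng.normal(21, 7, 100)  # hypothetical 'group A'
                  pre_b, post_b = rng.normal(30, 6, 100), rng.normal(20, 7, 100)  # hypothetical 'group B'

                  def within_group_d(pre, post):
                      """b) of 1 & 2 above: pre-post effect size WITHIN one group (what the post hoc did)."""
                      pooled_sd = np.sqrt((pre.std(ddof=1) ** 2 + post.std(ddof=1) ** 2) / 2)
                      return (pre.mean() - post.mean()) / pooled_sd

                  print(f"group A d = {within_group_d(pre_a, post_a):.2f}")
                  print(f"group B d = {within_group_d(pre_b, post_b):.2f}")

                  # a) of 1 & 2 above -- testing significance BETWEEN groups, e.g.
                  # scipy.stats.ttest_ind(post_a, post_b) -- is precisely what the
                  # post hoc did not do.
                  ```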

                  Please feel free to ask questions on anything you don’t understand. That would be better than me giving an ongoing spiel that may not help in clarifying your confusion.

                  Kind Regards
                  Clive Jones PhD MAPS

                2. Hi J Dwyer,

                  Apologies for my delayed response on some aspects of your question posed to me, but I needed to confirm some of the facts first to make sure I didn’t post anything inaccurate.

                  To the issue you raise over the Provisional Psychologists Facebook forum, where you have suggested it “is indeed comprised primarily of high school students and undergraduate students, as evidenced by the frequent number of questions about entering undergraduate university courses and the advertising of events in the group aimed at high school students”:

                  To address this speculation directly, I’ve chatted to one of the administrators of the forum in question and they gave me a ‘mud map’ estimate of around 3/4 being a mix of 4+2 and 5+1 provisional psychs and the other 1/4 a mix of honours students and fully registered psychologists. The administrator I spoke to acknowledged there would be a few outliers to this (e.g., some undergrads), but it’s quite clearly set up as a forum predominantly of provisional psychs with an additional cohort of honours students, some master’s students and also fully registered psychs. Of the individuals commenting and discussing specifically on the posts I put up, a quick scroll through shows quite plainly that they were a similar mix to the broader demographic of the forum, but actually included a few PhDs, and possibly skewed toward a few more master’s and practising psychs when compared to the overall demographic of the members of the forum. I also tried really hard to find that high school student’s unanswered question, but to no avail.

                  It’s actually quite disappointing that I have to talk to you on this next point, J Dwyer. To clarify not only my utmost respect for peer review but also my own small contribution to it: I’ve been a peer reviewer of manuscripts for the Journal of Health Psychology, Stress Anxiety & Coping, Australian Psychologist and The Australian Journal of Psychology. I have also, at various times, received invitations to externally examine and grade honours, master’s and doctoral theses across clinical, counselling and sport psychology specialities. So in this context, rather than skirting peer review, I have over the years played a direct part in contributing to it. It’s something I really love to do, time permitting, and I particularly enjoy those times I’m inspired by the quality of the work I have the good fortune to read. This is often particularly true of the thesis work. A quality doctoral thesis to me reads like a good red wine tastes 🙂 The levels of complexity, the depth, and richness in colour… 🙂

                  I will address your concerns about me throwing data around willy-nilly to thousands of colleagues across Australia and internationally via my online professional network on LinkedIn in a future reply.

                  Kind Regards
                  Clive Jones PhD MAPS

          3. Thank you so much! A brilliant beam of light shining into the murky shadows of muddied subject matter and self-interested biases as we struggle to restore the dignity of our profession from the ditch of medicine’s poor, mute cousin.
            Thanks Doc. One thing does concern me though: the reference to “feral cats” might be considered demeaning to some, although it is really a colourful metaphor with no intended harm from your repeating of it. Could it reflect badly on the profession of psychology to publish such a detraction from the views of the disgruntled and disenfranchised members of our profession and other interest groups by calling them feral cats? Personally I don’t have a problem with colourful metaphors and direct expressions in a robust debate about things that we all care passionately about (including the importance of controlled, dispassionate subtleties of best-practice psychological research), but there are forces at work to gazump contributors to the debate by impugning the profession’s credibility on the basis of causing offence. What say you on the subject of the choice of expression “feral cats” and whether its use in robust debate brings the profession into disrepute?
            With deep respect to you, the sanctity of ethical social scientific research, the holy art of Psychology and your hard-bitten wisdom.
            Gregory Goodluck

            1. Thank you for pulling me up on this, Gregory! I totally agree and sincerely apologise for any offence I may have caused anyone in the use of that metaphor. The goal was simply to raise the important need for us all to keep our scruples in the heat of some discussions, but as you’ve rightly flagged, Gregory, my use of the term was in itself ‘feral’ and wrong. So again, apologies everyone, and thank you Gregory for pulling me up on it.

              Clive Jones PhD MAPS

              1. Thanks for your gentlemanly response, kind Sir Clive. Personally I believe that, whilst freedom of speech doesn’t give us the right to call “fire” in a cinema (who said that?) or personally attack people, a robust democratic discussion of broad and far-reaching issues needs to tolerate a little creative metaphor and simile by way of quick, visual shorthand, occasionally, in order to remain a viable and vital discussion, without crying foul at every opportunity to silence or discredit alternative points of view because they may ‘offend’ a little. It would be sad if we all continued to walk on eggshells with gags in our metaphoric mouths for perpetual fear of offending someone. It is better if people indicate that they are offended and we welcome the feedback to adjust the tenor of the conversation accordingly. In some situations, “Perfect is the enemy of good.” Remember we are all doing this stuff in our spare time… often late at night, so let’s be realistic about our expectations. I would also like to take this opportunity to apologise for the typos in my posts, which are largely due to an injured right hand I am attempting to rehabilitate to the keyboard after a dog attack. I also apologise if I have offended anyone with my colourful, shorthand metaphors. I aim to disturb, not offend: to disturb the cherished false beliefs of the comfortable that cause real disadvantage, discomfort and disturbance to very good psychologists who are being trashed (metaphor too much?) by the effects of the actions of the comfortably ensconced,
                self-proclaimed elites within the APS. There is a difference between hearing an uncomfortable truth or opinion that you find disturbing on the one hand, and an offence on the other. The standard of care needs to be flexible enough to allow robust debate and word pictures that convey passion and reason. That’s the point of a chat/blog, isn’t it?
                A melting pot of ideas and opinions.
                This is also moderated by someone sensible and sensitive and I am happy with the level of censorship that it allows for – anything higher would be just too Orwellian. Although Big Brother/Sister is almost certainly watching and plotting for picking people off in devious ways.
                Some ancestors of democracy and the enlightenment cried “Liberty or Death”, others, “Better to die on your feet than live on your knees” and today we seem to all shiver in our shoes when someone says they are offended by a general comment that was shorthand illustrating a broader point because we are threatened with the fear of incurring the wrath of the very status-quo-puppeteers we dared to criticise in the first place.
                Beware the thought police vetting your opinions for “offence”.

    1. In response to “Curious”, I wonder if the APS and AHPRA can demonstrate that Clinical psychologists produce better outcomes to justify the higher Medicare rebate their clients can claim. Can they demonstrate that a psychologist with clinical qualifications and little experience produces better outcomes than a registered psychologist with a decade or two or three of clinical experience?

      1. In response to “Wondering”, can anyone explain why generalist psychologists with decades of experience insist they should only be compared to newly graduated Clinical psychologists? Surely it would be fairer to compare generalist and Clinical psychologists with a decade or two or three of clinical experience, or early-career psychologists from clinical and 4+2 streams?

        1. J Dwyer – the comparison is being made because, as it currently stands, newly graduated Clinical Psychologists are automatically deemed more experienced/qualified by virtue of their training in respect of Medicare rebates and title. You may have read elsewhere that Centrelink and other government agencies no longer accept reports from generalists – despite decades of experience – in favour of a report from a Clinical Psychologist, irrespective of experience.
          But you raise a valid point – it would be interesting to see if there are truly quantifiable differences in quality of practice after a certain amount of practical experience between the cohorts – this would answer a lot of questions, wouldn’t it?

          1. Actually the literature tends to show that our effectiveness tends to either reduce or remain stable as we gain experience with new graduates generally getting the best outcomes. The idea that a psychologist is better because of their 20/30 years of experience is currently not supported by research.

            1. I beg to differ. If your skills don’t increase with experience then you haven’t been continuing to educate yourself! I have changed the way I practise over the years through continuing education and evaluating my efficacy.

            2. Hi Curious, I am curious to know what literature you refer to and what dimensions of effectiveness were measured in correlation (?) to the age factor. Was it a steady decline or a rise and dip?
              If it was an inverted bell, where was the median high point? What was the N and population sampled, power, reliability and validity, and what other factors were controlled for? (I am a conscientious, aging, 4+2 Consulting Psychologist with an edjamakation. Point taken?)

                1. Thanks for the reference, Curious.

                  Unfortunately it’s hardly a representative study, being based on only 170 practitioners all working within the same facility, which happened to be a university counselling centre, and the clients were university students.

                  There are several limitations to this particular study, including length of experience (average 5 years), lack of diversity of practice environments (all therapists worked within the same facility), small sample size (N=170), and no information about the proportion of trainees to fully qualified staff, just to name a few.

                  The authors themselves acknowledged that the therapists in the study were only required to complete 24 hours of professional development activities every 2 years to maintain their licence, and listed as a limitation of the study the fact that the effort therapists had made to improve outcomes – including their training, use of supervision and continuing education – was unknown.

                  Their study utilised doctorate-level trainees and therapists and found significantly below-average effect sizes when compared to known averages, which suggests that the therapists sampled may potentially not have been particularly skilled to begin with, despite their training.

                  The authors concluded that other factors, such as patient variables and therapeutic relationship quality are likely stronger contributors to outcomes than therapist experience.

                  It still sounds to me like there is no accepted evidence that the current qualifications-based hierarchy has any scientific merit at all.

                  1. I am pleased we can have an intelligent conversation about the dearth of empirical evidence supporting the misinformation driving the apartheid within the profession of Psychology in Australia.

                    “Curious” The article you referenced says, “…therapist effects in these data were small overall (explaining only approximately 1% of variance in patient outcomes) and considerably smaller than average (see Baldwin & Imel, 2013), suggesting that other factors (e.g., patient variables, relationship factors; Bohart & Wade, 2013; Norcross,2011; Orlinsky, Rønnestad, & Willutzki, 2004) are likely stronger contributors to outcome than therapist experience. The small observed effects should thus be understood in the broader context of known therapy ingredients.” Goldberg et al 2015 [for full article see link provided in above comment by “Curious”]

                    Thanks “Curious”. You’re a real card, because attributing your bold assertion that “Actually the literature tends to show that our effectiveness tends to either reduce or remain stable as we gain experience with new graduates generally getting the best outcomes” to the research by Goldberg in the way you did in fact supports the point we are making: that impulsively jumping to premature conclusions of group superiority based on limited experience or limited empirical data (empirical meaning sensed experience) is a real and dangerous thing for our profession. A more experienced and discerning psychologist might be cautious of making such bold assertions based on the admittedly limited and confounded research in Goldberg. I suggest we all remind ourselves of the need to be wary of prematurely jumping to conclusions based on limited information. (Can someone do a study on age and experience as predictors of prudence and discernment? Or is that just common sense?)

                    Goldberg et al (2015) conjectured: “As time progressed, these therapists may have received fewer training experiences and increased caseloads that included more difficult patients, resulting in poorer outcomes even if their skill level was improving.”

                    And:

                    “As part of expertise in psychotherapy is dealing with a range of patient severity, it is not possible to evaluate the development of this kind of expertise in the current sample nor could the development of this kind of expertise be reflected in the results.” (Goldberg et al. 2015)

                    I am, however, grateful to you, “Curious”, for sharing the well-reported and much-discussed Goldberg et al (2015) research and providing the opportunity for a serious intercollegial discussion of an important issue. And I am pleased we can have some more input into the possibility of an intelligent conversation about the dearth of evidence supporting the misinformation driving the disunity, apartheid and divisions within the profession of Psychology in Australia.

                    What sort of ‘psychotherapists’ were they in the Goldberg et al. American study of student counsellors quoted by ‘Curious’? Will someone please fund some relevant local studies of Australian populations of psychologists to tease out the effectiveness of various types of psychological therapists providing mental health treatment?

                    There were many limitations to the Goldberg study recognised by the authors in the very discussion of the study itself, including: the sample comprised trainees and older therapists with a student clientele of average age around 22.6 years; the trainees and earlier-career counsellors probably had smaller caseloads and more supervision; the working environment may not have had the professional development, reflective practice and reasonable caseload factors that would better facilitate progress in therapeutic prowess; increased caseloads with length of service may contribute to burnout and poorer outcomes regardless of a therapist’s ability; nearly 40 percent of therapists in the study did improve over time; the average overall outcome decline was tiny; and the N=170 was too low. The significance and generalisability of the findings is extremely limited, as the authors themselves question its merits and suggest further research to clarify what other factors could account for the significantly scattered variation in therapist outcome trajectories and the very slight decline in the aggregated average outcome trajectory.

                    It is good to read some balanced musings by the authors of the research which suggest that these results must be taken with a huge grain of salt.

                    For example, the study reports on pages 8 and 9: ‘The small decline over time should be considered in the context of the random effect indicating that there was significant variation in the therapists’ trajectories over time.’

                    Goldberg et al. go on to state: ‘Curiously, the results of the present study contrast with clinician self-reported experience. In a large, 20-year, multinational study of over 4,000 therapists, Orlinsky and Rønnestad (2005) found that the majority of practitioners experience themselves as developing professionally over the course of their careers. In particular, therapists with 15 or more years in practice were significantly “more likely than their juniors to experience work with patients as an effective practice, were less likely to have a disengaged practice, and only rarely found themselves in a distressing practice” (p. 88).’

                    And:
                    ‘… the therapists in the current sample may differ in meaningful ways from those in Orlinsky and Rønnestad’s (2005) sample, which included highly experienced therapists working in more diverse settings (e.g., independent practice). It may well be that the therapists in Orlinsky and Rønnestad’s sample may have had a very different experience than the therapists in the present sample, and may have used that experience to improve over time.’

                    ‘… One reason why we may have failed to detect improvements in outcomes in our sample overall (despite indication that some therapists did improve across time) could be due to assessing only the quantity of experience, with no measure of the quality of experience …’

                    On the quality of experience, including professional development opportunities: ‘To be effective, efforts must be “focused, programmatic, carried out over extended periods of time, guided by analyses of level of expertise reached, identification of errors, and procedures directed at eliminating errors” (Horn & Masunaga, 2006, p. 601).’

                    A problem with sampling populations of student counsellors to test the theory that psychotherapists in general [or, in our case, psychology clinicians] improve with age is that, according to Goldberg: ‘… The conditions necessary for improvement are typically not present for therapists in practice settings such as the one in the present study (Tracey et al., 2014), but some therapists may engage in such practice (Chow et al., 2015).’

                    ‘… Future work would clearly need to evaluate … efforts to improve outcomes, ideally in adequately large samples of both patients and therapists. Further, it may be fruitful to examine what personal, professional, or caseload differences differentiate the therapists who do show improvements over time from those who fail to improve … One therapist variable to examine in a more fine-grained way than was possible in the current study is therapists’ level of prior experience—it may be that the trajectory of change in outcomes across time varies depending on experience level (although the nonsignificant interactions we report between time or cases and proxies for prior experience would suggest this is not the case). Likewise, it may be valuable to more fully understand why some clinicians show decreased outcomes over time. Professional burnout, a long noted liability in the helping professions (Raquepaw & Miller, 1989; Skovholt & Trotter-Mathison, 2011), may be worth examining.’

                    Goldberg et al., critiquing their own study, state that:
                    ‘There are a number of limitations to the present study. First, the sample of therapists was heterogeneous, including practicum students, interns, postdoctoral therapists, and licensed therapists. The more novice therapists received supervision, had reduced caseloads, and may have received other support. As time progressed, these therapists may have received fewer training experiences and increased caseloads that included more difficult patients, resulting in poorer outcomes even if their skill level was improving. However, controlling for initial severity (i.e., patient difficulties), removing novice therapists (viz., those with less than 1 year of data, primarily practicum and predoctoral interns), and examining the interaction between proxies for therapists’ prior experience and time or cases did not change the results.

                    A second limitation is that even though this is the longest longitudinal study of experience, the range of experience (viz., from 0.44 years to 17.93 years, with a mean of 4.73 years) was restricted. Skovholt, Rønnestad, and Jennings (1997) asserted that it takes 15 years on average to develop an internalized style, which according to some is an aspect of expertise.

                    Third, outcome was the only indicator of skill development, and one could claim that particular skill domains should be the focus instead (see Shanteau & Weiss, 2014). However, the attempt to establish that rated competence, for instance, is related to outcome has been difficult (e.g., Branson, Shafran, & Myles, 2015; Webb, DeRubeis, & Barber, 2010), and therefore we chose to focus on outcomes. Relatedly, no single standardized treatment was provided to patients, and thus it is not clear how therapist skill (as it relates to the delivery of a specific intervention) could be operationalized.

                    Fourth, the amount of effort that therapists used to improve outcomes, including training, supervision, and continuing education, was largely unknown. Indeed, it may be that the quality of experience (that is, experience marked by training more likely to impact outcomes, perhaps through the inclusion of deliberate practice of specific therapy skills) proves to be a better predictor of outcomes than the mere quantity of experience measured in the present study.

                    Fifth, while patient diagnosis was largely unknown, the setting from which these data were drawn (i.e., university counseling center) rarely includes patients with more severe mental illnesses (these illnesses can interfere with gaining admission to or maintaining enrollment at the university), although patients with considerable distress are nonetheless increasingly found in counseling center samples (Benton, Robertson, Tseng, Newton, & Benton, 2003; Erdur-Baker, Aberson, Barrow, & Draper, 2006).

                    ‘As part of expertise in psychotherapy is dealing with a range of patient severity, it is not possible to evaluate the development of this kind of expertise in the current sample nor could the development of this kind of expertise be reflected in the results. Relatedly, patients’ average age (i.e., 22.60 years) and the relatively brief courses of therapy on average, while typical of counseling center populations, may not generalize to other settings (e.g., community clinics). A replication of the current findings in a noncounseling center sample (perhaps even a sample using standardized treatments targeted to a specific disorder) would be worthwhile.

                    Sixth, although early termination was used as a proxy for dropout (and, indeed, therapists were shown to improve as years of experience accumulated), it is likely that considerable dropout was not captured in this way. A future study would do well to examine whether rates of mutual termination increase as therapists become more experienced, regardless of when in the course of therapy termination occurs.

                    Last, therapist effects in these data were small overall (explaining only approximately 1% of variance in patient outcomes) and considerably smaller than average (see Baldwin & Imel, 2013), suggesting that other factors (e.g., patient variables, relationship factors; Bohart & Wade, 2013; Norcross, 2011; Orlinsky, Rønnestad, & Willutzki, 2004) are likely stronger contributors to outcome than therapist experience. The small observed effects should thus be understood in the broader context of known therapy ingredients.’
                    References [see link provided in above comment by “Curious”]
                    Article in Journal of Counseling Psychology © 2016 American Psychological Association
                    2016, Vol. 63, No. 1, 1–11
                    Article Received August 16, 2015
                    Revision received October 21, 2015
                    Accepted October 21, 2015

                    1. Gregory, I do not thank you for your offensive reply. I am very keen for intelligent and unbiased discussions of the literature to take place and am really happy to see people going and reading research about outcomes for themselves. I am, however, offended by your personal attack and your assumption that I am inexperienced. You also seem to assume, incorrectly, that my goal is an adversarial debate, when I am really hoping for a meeting of minds to expand all of our knowledge with the help of those who have read the literature.

                      I am not young, nor inexperienced, and I’m sorry if I blow your mind, but I am actually a generally registered psychologist. I am also close to completing a PhD, and so am perfectly capable of being ‘discerning’ when reading research for myself and do not require someone to copy and paste sections of an article for me to understand it.

                      It is these kinds of personal attacks that create an ‘us and them’ mentality, when we really could all attempt to gain some shared understanding for the benefit of everyone. I hope that you treat your clients with a little more courtesy and goodwill.

                      To everyone else, please keep reading and commenting and adding other articles that we may not have seen. It’s a really important issue within our profession right now and it would be great if everyone could support their arguments from a place of knowledge and understanding rather than emotion.

                  2. Hi. I’m pleased to see that people are going and reading this research for themselves. Of course there are limitations, as with all research in psychology, but to date this is the closest to real evidence that we have, and it does throw doubt on the idea that experience = better outcomes. If you are abreast of the extensive research into ‘common factors’, you will recognise that this is just another paper supporting the finding that the biggest predictors of outcome are actually things like patient factors. However, that doesn’t mean we shouldn’t continue to look into what we can do to improve the factors that we can control.

                    I didn’t provide this paper to support the ‘current hierarchy’ but in response to the many people on this forum who seem to believe that their years of experience mean they are better than new graduates (of either postgrad programs or 4+2). So far this is unsupported in the literature.

                    1. Beware the impulsive reactionary response. Let that be a lesson to us all. No offence intended; just making a point. Good science is great. Cherry-picking vague conclusions from limited data sets and making them into clubs with which to win a point in an argument is a travesty to good science. What we can take from all this is that length of experience is less important than quality of experience, but it is still important. And “people who live in glass houses shouldn’t throw stones… unless they toughen up.” I treat my clients with the utmost respect, and I also treat my colleagues with respect, although the very Association that supposedly advocates for me doesn’t respect my status.
                      One adage I enjoy contemplating, although not an absolute motto for me, is, “Comfort the disturbed and disturb the comfortable.” There are a lot of comfortable misconceptions about, such as the idea that a recent graduate endorsed in clinical psychology should be employed over an experienced, trench-hardened psychologist, thereby sparing employers the need to base their hiring choices on people’s actual merits rather than on some fantastical invented “brahmanesque” caste system. There are a lot of psychologists disturbed by the reality of their shrinking prospects of providing a service because of the apartheid movement white-anting our status and credibility.
                      It is for my clients and everybody’s clients and the public in general that I have stimulated this debate over factors of experience etc.
                      “Curious” (curious name :)), if you say that the research tends to support something, then provide some limited research evidence with low power when asked, and then have that research reviewed and critiqued by peers using its own internal criticisms of its obvious flaws, flaws that render it virtually insignificant, please don’t expect to be applauded for undermining the arguments of the uncomfortable and disturbed ‘generalists’.
                      But thanks for catalysing the debate further. And please try to lighten up a little, brother/sister… we are all friends here. We are not all academics and may write in colourful ways, but nothing is meant maliciously. Opinions are being expressed in general terms from a range of valid bases. Please don’t take it too personally (I presume that Curious is not your real name?) or get too personal (Gregory Goodluck is my real name).
                      And please look after yourself well if you are in the ‘belly of the beast’ of academia doing a PhD write-up. I hear they can be gruelling and life-threatening experiences, e.g. updating the references and making sure the research is still current on the shifting sands of scientific enquiry. Respect to you for that! (Interested to know what your thesis is about.) I hope your work/life balance is in place. In this wide brown land you will encounter all sorts of characters from all corners of the country, and when you graduate, “Oh, the Places You’ll Go!” (a cool Dr Seuss book that might lighten things when you get some down time). But what is more pertinent to our current situation is “The Sneetches” by the same good Dr Seuss. We all want a star on our belly, except those of us who want them removed… Let’s all just chillax a little bit.
                      Goodnight

            3. In response to Curious –

              “Unless new data suddenly comes to light (which would be amazing) the FACT remains that there is no data to prove that clinicals get better outcomes but there is also no data to prove that there is equivalence between generals and clinicals.”

              In light of the absence of evidence for either argument, how then can the current hierarchy be justified – particularly when it directly and adversely impacts the general public, but also detrimentally affects the professional integrity and financial security of thousands of practitioners?

              If the only available evidence truly indicates that efficacy reduces or stabilises with years of practice, aren’t those even more compelling grounds to conclude that the current hierarchy is artificial and damaging?

            4. Hi Curious,

              Literature on the development of expertise, while confirming it’s not simply about time in practice, endeavours to examine what we do in that time in practice, to get a better idea of what may either help or hinder our development as practitioners over time.

              From my understanding, the general research questions being asked about the development of expertise are more around things like: what can we do as practitioners to ensure we develop and grow over the years? What makes us better practitioners, what might cause us to stagnate or, worse still, what might cause us to lose our chops as practitioners over the years?

              As you flag, simple time in service has been shown to be no guarantee of improvement. Things that do show an influence include ongoing professional development, the skilled application of reflective practice, and regular supervision (e.g., peer dialogue regarding case management is an effective way to cover the supervision base).

              Looking further afield, beyond the realm of just psychologists in practice, there is a lot of research comparing novices and experts on, for example: differences in the explicit and implicit schemas of the practice environment they interact with; differences in the accuracy with which they interpret environmental cues contained within that environment; differences in their capacity to distinguish the relevant cues they may need to respond to from the irrelevant ones; and differences in their capacity to apply effective behavioural responses to the cues they deem relevant. Research does tend to show a difference between beginner and longer-serving practitioners on these sorts of things. All really interesting stuff.

              Kind Regards
              Clive Jones PhD MAPS.

        2. “anyone explain why generalist psychologists with decades of experience insist they should only be compared to newly graduated Clinical psychologists?”

          The actual point that Wondering was making is not what you are asking … it is that the two-tier system assumes exactly this, because it rebates newly graduated clinical psychs at a considerably higher level than generalists with decades of experience.

        3. This is about the rebates again. A clinical psych can get the tier one rebate whether they got their endorsement yesterday or 40 years ago. A non-clinical psych cannot get the tier one rebate no matter how long they have been registered. No-one (as far as I’m aware) is arguing that a clinical psychologist with decades of experience is inferior; however, the rebates imply that a non-clinical psychologist is inferior, even if they have decades of experience, even if they have plenty of training, and even if they have endorsement in other areas. The comparisons between experienced non-clinical psychologists and newly endorsed clinical psychologists keep occurring because this is precisely the aspect that experienced non-clinical psychologists find upsetting.
