Selection of students to medical schools is a
matter of world-wide discussion and debate [1-3]. A major reason for
this is the gap between medical training and societal health needs.
India is at the threshold of a national-level policy change in the
procedure of admissions to medical courses. A single examination at the
national level has been introduced for selecting students to
undergraduate (UG) and postgraduate (PG) medical courses. There is a
raging debate, involving educationists, policy-makers and the judiciary,
on what is most suitable in terms of logistics, transparency,
effectiveness, fitness to purpose, and acceptability. A consolidated
opinion of medical educationists is as yet unexpressed.
Only when there is a clear purpose to the selection,
linked to orientation, can the most appropriate process for admission to
medical schools be evolved. The obvious purpose of selection to
undergraduate courses is the identification of the most suitable
candidates for a future physician role. The selection process must also
weed out applicants who may harm the community. The purpose of
selection to postgraduate courses is to identify, from amongst medical
graduates, the most appropriate candidates with the ability to carry out
specialty practice.
Working backwards, a good admission process must
involve procedures and criteria that subserve both the desired purpose
and the outcome. It is therefore crucial to know which selection
procedures contribute towards achieving them, and to what extent.
What Works and to What Extent?
Motivation, aptitude and ability are the three
pillars that enable a person to perform well in any profession.
‘Motivation’ is assumed when a student applies for admission to a
medical school. ‘Aptitude’ pertains to natural flair and plays a key
role in deciding how comfortable the person will be in functioning as a
physician. ‘Ability’ decides whether the student has the potential to go
through the academic rigors of medical training, and subsequently
fulfill the demanding roles of a physician. Therefore, the optimal
admission process must consider both ‘Academic’ and ‘Non-Academic’
criteria. Table I compares the processes followed for
admission to undergraduate medical courses in India with those of some
other countries.
Academic evaluation for selection: Academic
evaluation is universally applied for selecting students to medical
courses (Table I). Academic performance prior to admission
to medical school has a moderate influence on performance during
medical school, explaining about 13% of the variance [4,5]. This suggests
that prior academic performance is not, on its own, adequate for
predicting performance in medical school or in later professional life.
For this reason, countries such as the United States, the United Kingdom,
and Australia use a combination of end-of-school examination scores,
a medical admission/entrance test score, and additional criteria of
suitability for admission to undergraduate courses
[1,4-7,11-13].
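To put the variance figure in context (the arithmetic below is our illustration, not a figure from the cited studies), variance explained is the square of the correlation coefficient, so 13% corresponds to a modest predictive correlation:

```latex
% Variance explained is the square of the correlation coefficient r.
r^2 \approx 0.13 \quad\Longrightarrow\quad r \approx \sqrt{0.13} \approx 0.36
```

This conversion is useful when comparing the validity coefficients quoted for individual selection tools elsewhere in this article.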
TABLE I Processes Followed for Admission to Undergraduate Medical Courses in a Few Countries

| Parameter | USA and Canada [3] | UK [4-7] | India |
|---|---|---|---|
| Basis of admission decisions | A combination of cognitive and non-cognitive methods | A combination of cognitive and non-cognitive methods | Usually a single cognitive method* |
| Eligibility | After three years of graduate college education | Soon after school education | Soon after school education |
| Methods used | Medical College Admission Test (MCAT)# scores supplemented by undergraduate Grade Point Average (uGPA), an interview or multiple mini interviews (MMI), personal statements, and letters of reference | End-of-school scores (A Level) supplemented by personal interviews or letters from referees for assessing non-cognitive attributes, and aptitude testing by the UK Clinical Aptitude Test (UKCAT) | An MCQ-based written entrance test across almost all states* |

#Nearly 60,000 students appeared for the MCAT in 2015; approximately 4.75 lakh students registered for the re-introduced NEET-UG in India in 2016. *Exceptions: In the state of Tamil Nadu, 85% of medical admissions are based on class 12 marks in science subjects [8]. The Armed Forces Medical College, Pune, and the Christian Medical College, Vellore, shortlist applicants on the basis of cognitive test scores (the All India entrance examination); the former institution then supplements the scores with a separate Test of English Language, Comprehension, Logic and Reasoning (ToELR), a psychological test, and an interview, while the latter adds scores from an online aptitude assessment test [9,10].
In India, end-of-school examination scores are used
almost exclusively to determine eligibility for appearing in the medical
entrance examination; they are not used for
admission decisions save in the state of Tamil Nadu [8]. For admission
to PG courses, the term ‘prior academic performance’ would refer to the
performance during UG medical training. While the USA utilizes the
scores of the United States Medical Licensing Examination (USMLE) in
selection decisions, no such procedure is followed in India and many
other countries.
Non-Academic evaluation for selection: It is
ironic that, while selecting entrants to medical schools, the greatest
emphasis is on academic performance/potential, whereas at the user end
(when visiting a doctor for consultation), the patient can discern only
the professional, ethical and interpersonal behavior of the physician.
Higher compliance with treatment, better satisfaction and less
litigation have been found to be associated with such soft skills of the
physician [14-16]. The intention in bringing these facts to the fore is
not to de-emphasize academic and cognitive attributes, but to shine a
light on soft skills/non-cognitive attributes.
Non-academic evaluation can cover a wide range of
attitudinal and behavioral characteristics, and skills such as
interpersonal communication, professionalism, ethical reasoning,
team-working skills and stress coping ability [16,17]. All of these,
being complex constructs of many human qualities, are not amenable to
easy evaluation by a single test. Also, it is not feasible to evaluate
for all of these; thus, for selection purposes, only the most important
of these may be assessed [17]. The popular methods utilized are aptitude
tests, personal interviews, personal statements or essays by applicants,
multiple mini interviews (MMI), letters of reference, situational
judgment tests (SJT), and other tests such as assessments of personality
and emotional intelligence [4,16,17]. A few key aspects of some of these
methods are discussed below.
Personal statements and letters of reference, though
extensively used, have been found to lack reliability, and have limited
validity and overall utility as a predictor of future performance [3,4].
They are prone to contamination by way of plagiarism and third party
inputs. Also, they are resource intensive, as they have to be
individually assessed by subject experts. Reference letters have also
failed as predictors of future performance perhaps because of inherent
bias on the part of the self-selected referees [3].
The interview is a versatile though resource- and
time-intensive method. The way it is conducted, its content, and the
interviewer all have a bearing on its reliability and validity.
Selection interviews are best conducted in a structured manner, with
standardized questions, by a trained panel of interviewers
using validated scoring criteria [4,11,18]. Increasing the number of
assessors can minimize interviewer bias. Multiple mini interviews (MMI)
offer a further improvement; the predictive validity of MMI is found to
be higher (0.4-0.45) than that of traditional interviews (<0.2) [4,19].
Aptitude may be tested separately or incorporated as
a separately evaluated and weighted section in the cognitive test. The
Medical College Admission Test (MCAT) in the USA and the United Kingdom
Clinical Aptitude Test (UKCAT) in the UK are two examples. While the
UKCAT is predominantly an aptitude test, the MCAT, in addition to
aptitude, also tests the biological and physical science knowledge base,
writing skills, and problem-solving skills. The evidence for predictive
validity of aptitude testing is positive but widely variable (0.14-0.6)
with respect to performance in medical school [4,13]. In India, testing
for aptitude is non-existent at present except for Christian Medical
College, Vellore where an online aptitude test is conducted [10].
Need for a Change
Lessons from other countries: Most universities
and medical schools in USA and Canada utilize a combination of methods
for ranking and admission decisions [3]. Three years of graduate college
education is a prerequisite for entry into medical school. The
undergraduate Grade Point Average (uGPA) score from college education is
included, and usually supplemented with MCAT, along with additional
methods such as interviews, personal statements, and letters of
reference. McMaster University, Canada, is credited with the
introduction of MMI that has remarkably improved upon the personal
interviews, and has replaced traditional interviews in many universities
[19].
It is important to note that the USA does not use a
single high-stakes examination such as the MCAT (for UG admissions) or the
USMLE (for PG admissions). It is a common misconception that scores on such
standardized tests are the major determinants of acceptance. In addition
to these being used as one of several measures of suitability for
admissions, they are utilized to compare applicants from diverse
Universities that have different degrees of rigor in how they award
grades that eventually determine the GPA. The MCAT and uGPA are used
less as a ranking tool and more to determine who should be invited for
personal interviews. Data from the written examinations are integrated
with several other modes of assessment, including selection interviews,
for the admission decisions. Interviews do not test content knowledge
but critical thinking and communication skills. Due attention is also
paid to the life experiences of the applicants and their ability to
evaluate such experiences in their essays. The weight given to each of
the above components varies widely in different universities.
Similarly, countries such as the UK, Australia, and
the Netherlands use a combination of prior academic performance, some
form of cognitive testing, aptitude testing and additional methods of
non-cognitive assessment for deciding suitability of the candidates for
admission to medical schools [4-7,11,16,17].
Lessons from the past: Initially, the
end-of-school i.e. higher secondary examination scores alone were
used for creating a rank list and selecting students to medical schools;
no other input was considered necessary. Variations in standards of
school-leaving examinations and unfair practices creeping into the
selection process prompted the introduction of entrance examinations as
a common platform for entry to medical schools. However, there were
multiple examinations, some conducted by individual States, others by
institutions, and one national level examination, The All India Pre
Medical Test (AIPMT). The State examination was used to fill 85% of the
medical seats in a given State, while the remaining 15% seats were
filled through the AIPMT. Aspirants ended up preparing for and appearing
in several entrance examinations, and as a result often traveled across
the country multiple times in the admission year.
Single, nationwide entrance examination: In order
to improve selection processes, the Medical Council of India (MCI), in
2009, proposed doing away with entrance examinations conducted by States
and Institutions, to be replaced with a single national level
examination from 2013 onwards. The Government of India passed the
proposal in December 2010 [20,21]. The proposed examination was called
the National Eligibility cum Entrance Test [NEET] for admission to
undergraduate [NEET-UG] and postgraduate courses [NEET-PG]. However, it
was widely challenged in the courts, and the Supreme Court of India passed
a judgment in July 2013 quashing NEET [22,23]. Neither the reasons for
which NEET was challenged, nor the basis on which it was struck down,
had much to do with the educational utility of the examination [24,25].
In a surprising turn of events, the Supreme Court of India recalled its
judgment on 11 April 2016 for reconsideration, and NEET was finally
reintroduced this year.
The proposing and conducting authorities debated
largely about administration logistics, cost, the value of a common
standard of examination nationwide, and containment of corrupt
practices; medical educationists, on the other hand, deliberated the
utility of a single entrance examination - a debate that remains
without consensus [24-27]. The NEET may seem like the end of the
discussion for students, parents and conducting authorities, but for
medical educationists it is only the beginning of a mammoth challenge:
deciding the appropriate modality, content, assessment tool(s) and
duration of such a high-stakes examination, the weight given to various
aspects, and how to use the scores for making a ranking decision.
Pardeshi, et al. [27] explored the thoughts
of the most important stakeholders - the students - when NEET was first
announced. Though they focused on the NEET-PG, it is interesting to note
that only about half the interns felt the need for a single entrance
examination. This is ironic, since one of the bases for introducing NEET
was student convenience in appearing for only a single examination. More
interesting were the reasons that students shared for not wanting a
single examination: being their only chance that year, it would be a
single high-stakes opportunity; if one was sick on that day or unable to
appear for other legitimate reasons, there would be no
second chance or alternative. They also voiced concerns about having a
single uniform examination that did not take into account variation in
the quality of training in different States. It is worth mentioning that
with re-introduction of NEET in 2016, the NEET-UG examination has been
scheduled on two dates, and the NEET-PG examination is scheduled on nine
dates. This now allows reasonable flexibility to the candidates.
Single Entrance Examination for Selection Decisions –
A Critique
While NEET appears to be a solution for many ills in
the existing selection procedures, it also creates problems arising from
a uni-dimensional approach to admissions to medical colleges.
Nonetheless, for medical educators, NEET need not be just a challenge
but also an opportunity to rethink all aspects of medical education
in India.
Benefits of a single admission test such as NEET:
• Brings down the cost and effort for students
• Resource efficient
• Potential to curtail financial malpractices in admission
• Seemingly a ‘standardized’ and ‘objectivized’ national-level platform
The likely limitations of using a single high stakes
examination for admission to PG courses are summarized in Table
II along with suggestions to counterbalance the limitations.
Some of the major limitations deserve further analysis and discussion:
TABLE II Single Admission Test for PG Courses: Limitations and Suggestions for Improvement

Limitations:
• Raises the stakes on a single examination, with negative educational impact
• Limits options for students in case of non-selection [21]
• Suboptimal assessment of knowledge
• Students are likely to indulge in examination-oriented learning for ‘cracking’ MCQs rather than acquiring clinical skills
• Does not assess clinical or soft skills, essential for further medical training
• No testing of ethical judgment, professionalism, teamwork, etc.
• Students may skip content with smaller representation (e.g., Anesthesia, Psychiatry)
• Performance depends on many factors in addition to knowledge, thus bringing in ‘construct irrelevance’

Suggestions for making selection more valid:
• Give credit/weightage to performance in certifying courses as a qualifying criterion, e.g., the Higher Secondary examination for selection to UG courses and MBBS for selection to PG courses
• Stop the drift towards MCQ-oriented learning: a robust system of formative, ongoing, in-training assessment (Internal Assessment), together with a strengthened certifying assessment, will keep students focused on learning contextually and on acquiring the clinical and soft skills needed to become competent physicians
• Improve knowledge assessment by changing the format to a longer examination with MCQs that are context based and test clinical reasoning
• Include other tools for testing higher order thinking skills, aptitude and ethical judgment
Prior academic performance is ignored: Prior
academic performance, in the form of school-leaving examination scores,
has been reduced to an eligibility criterion, and that too at a meager
cut-off score of 50% (even lower for accommodating special categories of
applicants as a welfare effort). The adverse educational impact of this
type of assessment can be readily seen: school students have shifted
their focus from school studies and concept building to preparing for
the multiple choice questions (MCQ) of the medical entrance
examinations. To the students it makes sense, since the HSE scores are
devalued. This devaluation of prior academic performance has weighty
consequences. Studies from around the globe, including a relatively
recent one from Delhi, demonstrate that past performance can predict
performance in medical school [4,5,26]. A similar pattern of entrance
examinations exists in selection to medical postgraduate (PG) courses.
Performance in the MBBS examinations - assessed by 56 examiners - is not
given any importance, and students spend their internship preparing for
the MCQs that comprise the PG entrance examination. Acquiring competency
to practice as a physician is thus neither the focus of UG medical
students, nor assessed for making admission decisions to PG courses. This
unintended consequence of not adopting a systems approach deserves debate
and alleviation.
Students become MCQ solvers instead of exploratory
learners: NEET has an MCQ-based format and is a knowledge test,
whereas the purpose of a selection/admission test should be to assess
overall suitability for further medical training, not just the
level of knowledge. This diverts students to "selection examination"
oriented learning, focused solely on solving MCQs. Further, such a test
has the potential of discouraging students from exploring other learning
experiences, thus distorting their learning priorities [28]. This
misalignment in purpose and action needs to be addressed and redressed.
The 3 hour – 200 question format of entrance tests
fails to test higher cognition: Traditionally, most entrance
examinations follow the ‘3 hour - 200 question’ format, leaving little
option for paper setters to go beyond assessing recall of knowledge.
Since it becomes difficult to stratify thousands of students on the
basis of recall-type questions, the examiners resort to adding some
‘difficult’ questions about rare diseases or single case reports that
have no relevance to the objectives of the entrance examination [29].
Although NEET, being a computer-based test, provides a unique opportunity
for incorporating videos, recorded patient encounters and other methods
to test even the affective domain, this has not been utilized. Very few
efforts have been made to scientifically understand the impact of such
large-scale examinations. We allude to the earlier study from Delhi that
demonstrated that the entrance examination scores do not predict
performance in medical school [26]. This is in agreement with the
findings from other countries; no similar study could be identified
pertaining to PG entrance examinations. Clearly, more research is
needed, along with changes in MCQs so that they test higher order
thinking rather than recall, as is the case with most questions in the
USMLE examinations [30].
Clinical skills are not assessed: The proposed
NEET-PG does not test clinical competence, yet the implication is that
the applicant has the competence to start PG studies. An improvement in
the end-of-course MBBS examination as well as the in-training formative
assessment and feedback, is perhaps the key to justifying this
presumption.
In-training formative assessment has been regarded
as synonymous with Internal Assessment (IA) in the Graduate Medical
Education Regulations 1997 (GMER), though there is a fine difference
[31,32]. It has the potential of redirecting students from
examination-oriented learning towards in-depth, conceptual, contextual
and experiential learning. Much flexibility has been provided in the
regulations for planning and implementing IA in Indian medical schools
and every medical teacher has the potential of making the best of it.
Hence this aspect is discussed in some depth.
Improvement in Internal Assessment to offset the
undesirable effect of single PG admission test on student learning
The utility of IA in improving selection to PG courses rests on the
premise that, contrary to the obvious, UG and PG medical training must
be viewed as a learning continuum rather than as two different courses
separated by the selection examination. The learning process and
competencies mastered
during UG training are an important foundation for undergoing further
specialty training. Its importance is well elucidated by experts in a
recent article wherein they write, "A formative focus in
Undergraduate Medical Education better prepares the students for
residency training…." [33].
The essence of IA lies in its ‘formative’ role for
monitoring and positively influencing the process of learning by way of
timely feedback during the course. Further, the competencies that can be
assessed during training by direct observation at the workplace, such as
communication, professionalism, procedural skills, etc., are not amenable
to assessment in the final end-of-training examination. Hence, the
educational information provided by the IA and final examination
complement each other rather than merely being two numerical scores.
This requires careful drafting of a longitudinal assessment program that
covers the entire period of study [34]. The GMER 1997 of the MCI made a
beginning in this regard by making it mandatory to pass in IA to be
eligible for the final university examination and also according
weightage (20% at present) to the IA towards final results [32].
However, the full potential and formative function of IA remain largely
untapped in our country [31]. In most institutions, it is reduced to
sporadic assessments during the MBBS course rather than deliberately
linked assessments of the developmental attainment of competencies. An
effective
internal assessment must be based on multiple observations made by
multiple examiners over a period of time and, preferably, all faculty
members in the department should be involved [31,34]. This can also
compensate for any individual examiner’s bias.
In the USA, an ongoing comprehensive, multi-modal,
in-training assessment is done over the four years of undergraduate
training, and these are detailed in a document called the ‘Dean’s
letter’. This is an integral and important part of the application for
PG training, along with USMLE scores, personal statements, reference
letters, and on-campus interviews. The Dean’s letter also includes
previous education/accomplishments (prior to medical school entrance),
family background (if relevant), extracurricular accomplishments, etc.
The idea is to provide a synopsis of personal attributes of the
applicant. In clinical subjects, there is a more extensive write-up that
takes into account narratives provided by attending physicians and
senior residents as well as standardized subject examinations provided
by the National Boards. Most schools end by stating, ‘On the basis of the
overall performance we rate this student as Outstanding, Excellent, Very
good, etc.’ Usually each institution has certain academic criteria for
these adjectives (typically percentiles). Recognizing the utility of the
information provided by this comprehensive document in making selection
decisions for residency positions, the Association of American Medical
Colleges (AAMC) refined it to a standard format referred to as Medical
Student Performance Evaluation (MSPE) [35,36]. Further modifications
are now suggested, such as a focus on the core competencies, details
on professionalism, and more stress on the evaluation of clinical
clerkships (clinical postings) [37].
Whether an identical system is appropriate for India
can be debated; it is reasonable to say that, in the USA, in-training
assessment has been accorded importance during planning, implementation
and utilization - not only as a steering force for learning process and
skills acquisition in undergraduate education, but also as a measure of
suitability for admission to PG training; further, a subjective
description of performance, in addition to ‘objective’ scores, is also
given importance.
Some suggestions for alleviating other limitations of
a single entrance examination:
i. Duration of test: It is well known
that reliability increases with testing time. Increasing the
testing time will contribute to building validity as well as
reliability. In addition, increasing the time available per question
will allow the inclusion of application-oriented and problem-solving
questions rather than only recall and recognition questions.
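The link between testing time and reliability can be made concrete with the Spearman-Brown prophecy formula; the numbers below are purely illustrative, not drawn from any cited examination.

```latex
% Reliability \rho_n of a test lengthened by a factor n,
% given its current reliability \rho:
\rho_n = \frac{n\,\rho}{1 + (n-1)\,\rho}
% e.g., doubling a test with \rho = 0.80 gives
% \rho_2 = 1.60 / 1.80 \approx 0.89
```

A longer test also samples the content domain more widely, which supports validity in addition to reliability.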
ii. Don’t disregard the assessment of
crucial non-cognitive components: A conscious effort must be
made to overcome the tendency to discard the assessment of
components such as communication skills, ethics, professionalism
that are not easily amenable to ‘objective’ assessment methods, but
are sine qua non for good medical practice. We are perhaps
missing out on the merits of subjective assessment by equating it
with bias. While MCQs are labeled as objective, they are not truly
so, since whoever designs them does so on the basis of subjective
judgment. Isolated objective testing can be likened to the story of
the blind men and the elephant: each describes a part accurately,
but none has the complete picture. Subjective assessment also permits a better
assessment of soft skills. This could be in the form of an essay,
discussion of a situation for judgment analysis, interview, etc.,
depending on feasibility.
iii. A limitation not discussed further in
this paper, but definitely worth a thought and mention, is that a
single high-stakes examination has led to a culture of students
attending expensive preparatory courses and coaching classes.
Financially or socially disadvantaged students may find themselves
at a further disadvantage by not being able to afford, or find time
for, such coaching. If the examination is designed largely to test
aptitude, thinking process and application rather than recall, this
disadvantage may reduce to some extent.
In conclusion, we welcome the move to have a common
national examination in the form of NEET, which will help standardize
the admission process and make it uniform. However, we propose in this
paper several other considerations and improvements if we are to raise
the standard of medical education to that desired by the individual and
society. NEET should be a well-planned test conforming to the principles
of assessment discussed above, and subjected to the rigors of
evaluation. Some of the likely drawbacks of a single entrance
examination can be counterbalanced by strengthening the MBBS final
examination, and by making the in-training formative assessment
program/IA of the MBBS course more robust. Students can then be kept on
a desirable course of learning, with acquisition of the necessary
skills, rather than drifting to only test-oriented learning.
The concept of a golden alignment between curricular
components, viz. objectives, teaching methodology and assessment,
is well accepted. Gliatto, et al. [28] have rightly pointed out
that a proper balance must be maintained between the various curricular
components to provide working space for innovations in medical
education, so as to make it relevant to the health needs of society.
However, putting too many stakes on any one component - a single
assessment for career trajectories in this case - is likely to take away
any degree of freedom that we have to innovate [28]. They lucidly
express this in the American context, as quoted below, and it is easy to
draw parallels to the Indian context:
"If we want our assessments to reflect our values and
societal priorities, we need to break free of the self-imposed
constraints of using MCAT and USMLE scores to determine who advances
into medical school and residency"[28].
1. Prideaux D, Roberts C, Eva K, Centeno A, McCrorie
P, McManus C, et al. Assessment for selection for the health care
professions and specialty training: consensus statement and
recommendations from the Ottawa conference. Med Teach. 2011;33:215-23.
2. Powis D. Selecting medical students: An unresolved
challenge. Med Teach. 2015;37:252-60.
3. Siu E, Reiter HI. Overview: what’s worked and what
hasn’t as a guide towards predictive admissions tool development. Adv
Health Sci Educ. 2009;14:759-75.
4. Cleland J, Dowell J, McLachlan J, Nicholson S,
Patterson F. Identifying the best practice in the selection of medical
students (literature review and interview survey). 2012. Available from:
http://www.gmc-uk.org/Identifying_best_practice_in_the_selection_of_medical_students.pdf_51119804.pdf.
Accessed February 02, 2016.
5. Mercer A, Puddey IA. Admission selection criteria
as predictors of outcomes in an undergraduate medical course: A
prospective study. Med Teach. 2011;33:997-1004.
6. McManus IC, Powis DA, Wakeford R, Ferguson E,
James D, Richards P. Intellectual aptitude tests and A levels for
selecting UK school leaver entrants for medical school.
BMJ. 2005;331:555-9.
7. Wright SR, Bradley PM. Has the UK Clinical
Aptitude Test improved medical student selection? Med Educ.
2010;44:1069-76.
8. Selection committee, Directorate of Medical
Education, Government of Tamilnadu. Prospectus for admission to MBBS/
BDS courses 2016-2017 session. Chennai. Government of Tamilnadu; 2016.
Available from:
http://www.tnhealth.org/online_notification/notification/N1605308.pdf.
Accessed September 9, 2016.
9. Armed Forces Medical College, Pune: MBBS
Admissions-2016. Available from: http://afmc.nic.in/PDFfiles/MBBS%202016%20interview%20list.pdf.
Accessed October 12, 2016.
10. Christian Medical College. Revised supplementary
bulletin MBBS admissions 2016. Vellore; 2016. Available from:
http://admissions.cmcvellore.ac.in/linkeddata/uploads/MBBS%20BULLETIN%202016%20Dated%2015%20Aug%202016.pdf.
Accessed September 9, 2016.
11. Wilson IG, Roberts C, Flynn EM, Griffin B. Only
the best: medical student selection in Australia. Med J Aust.
2012;196:357-61. Available from:
https://www.mja.com.au/system/files/issues/196_05_190312/wil11388_fm.pdf.
Accessed February 26, 2016.
12. McGaghie WC. Assessing readiness for medical
education. Evolution of the Medical College Admission Test. JAMA.
2002;288:1085-90.
13. Julian ER. Validity of medical college admission
test for predicting medical school performance. Acad Med.
2005;80:910-17.
14. Laidlaw A, Hart J. Communication skills: An
essential component of medical curricula. Part I: Assessment of clinical
communication: AMEE Guide No. 51. Med Teach. 2011;33:6-8.
15. Tamblyn R, Abrahamowicz M, Dauphinee D, Wenghofer
E, Jacques A, Klass D, et al. Physician scores on a national
clinical skills examination as predictors of complaints to medical
regulatory authorities. JAMA. 2007;298:993-1001.
16. Urlings-Strop LC, Stegers-Jager KM, Stijnen T,
Themmen APN. Academic and non-academic selection criteria in predicting
medical school performance. Med Teach. 2013;35:497-502.
17. Powis D, Hamilton J, McManus IC. Widening access
by changing the criteria for selecting medical students. Teaching and
Teacher Education. 2007;23:1235-45.
18. Kreiter CD, Yin P, Solow C, Brennan RL.
Investigating the reliability of the medical school admissions
interview. Adv Health Sci Educ. 2004;9:147-59.
19. Eva KW, Rosenfeld J, Reiter HI, Norman GR. An
admissions OSCE: the multiple mini-interview. Med Educ. 2004;38:314-26.
20. Government of India. The Gazette of India,
Extraordinary Part III, Section 4. New Delhi. 27 December 2010; 342.
Available from:
http://www.mciindia.org/tools/announcement/2010Dec27_49068_Gazette_
Notification_NEET-UG.pdf. Accessed February 26, 2016.
21. Government of India. The Gazette of India,
Extraordinary Part III, Section 4. New Delhi. 27 February 2012; 41.
Available from:
http://www.mciindia.org/tools/announcement/2012Feb27_62051_Gazette_
Notification _NEET-UG.PDF. Accessed February 26, 2016.
22. Medical Council of India. Final Schedule for All
India Quota (NEET) UG Counseling 2013 (Annexure to letter No.
V.11017/1/2009-MEP-1 dated 24th June 2013). Available from:
http://www.mciindia.org/tools/announcement/2011_FinalCoreSyllabus_NEET-UG/NEET_UG_Counselling.pdf.
Accessed February 26, 2016.
23. Kabir A. Judgement in the Supreme Court of India
(Christian Medical College Vellore and Others versus Union of India and
Others. TC (C) 98 of 2012. Available from:
http://www.mciindia.org/tools/announcement/judgment-neet180713.pdf.
Accessed February 26, 2016.
24. Ananthakrishnan N. Saying no to NEET is certainly
not neat. Natl Med J India. 2013;26:250-51.
25. Singh T. Was it wrong to discard NEET? Natl Med J
India. 2014;27:119-20.
26. Gupta N, Nagpal G, Dhaliwal U. Student
performance during the medical course: Role of pre-admission eligibility
and selection criteria. Natl Med J India. 2013;26:223-6.
27. Pardeshi G. MCI and NEET-PG: Understanding the
point of view of medical graduates. Natl Med J India. 2012;25:314-5.
28. Gliatto P, Leitman M, Muller D. Scylla and
Charybdis: The MCAT, USMLE, and degrees of freedom in undergraduate
medical education. Acad Med. 2016;91:1498-1500.
29. Anand AC. PG entrance for dummies (Are you
looking for a postgraduate seat?). Natl Med J India. 2011;24:38-42.
30. Case SM, Swanson DB. Constructing written test
questions for the basic and clinical sciences. 3rd Ed. Philadelphia.
National Board of Medical Examiners; 2002. Available from:
http://www.nbme.org/PDF/ItemWriting_2003/2003IWGwhole.pdf. Accessed
September 15, 2016.
31. Singh T, Anshu. Internal assessment revisited.
Natl Med J India. 2009;22:82-4.
32. Medical Council of India Regulations on Graduate
Medical Education 1997. Available from: http://www.mciindia.org/Rules
and Regulations/Graduate Medical Education Regulations 1997.
Accessed May 31, 2016.
33. Konopasek L, Norcini J, Krupat E. Focusing on the
formative: building an assessment system aimed at student growth and
development. Acad Med. 2016;91:1492-7.
34. Singh T, Anshu, Modi JN. The Quarter
Model: A proposed approach for In-training assessment of undergraduate
students in Indian medical schools. Indian Pediatr. 2012;49:871-878.
35. Andolsek KM. Improving the medical student
performance evaluation to facilitate resident selection. Acad Med.
2016;91:1475-9.
36. Katsufrakis PJ, Uhler TA, Jones LD. The residency
application process: Pursuing improved outcomes through better
understanding of issues. Acad Med. 2016;91:1483-7.
37. Association of American Medical Colleges, Medical
Student Performance Evaluation Task Force. Recommendations for Revising
the Medical Student Performance Evaluation (MSPE). Washington DC.
Association of American Medical Colleges; 2016. Available from:
https://www.aamc.org/download/470400/data/mspe-recommendations.pdf.
Accessed October 8, 2016.