To produce graduates who have acquired measurable skills in critical thinking, researching, and writing about English literature, language, and writing disciplines and a demonstrable breadth of knowledge in the field. While the number of graduates who have entered PhD programs or taken teaching positions at two- and four-year colleges is an objective measure of our success in accomplishing this goal, not all of our students pursue further graduate degrees or post-secondary teaching. With that in mind, the department has determined three measurable learning objectives that apply uniformly to all students taking a graduate degree in English from Sam Houston State University: (1) the demonstration of critical thinking, researching, and writing skills, as measured by their class writing; (2) the demonstration of critical thinking and writing skills and breadth of knowledge, as measured by their performance on the written comprehensive examination; and (3) the demonstration of critical thinking skills and breadth of knowledge, as measured by their performance in oral examinations.
Objective
Demonstrating Critical Thinking, Researching, And Writing Skills: Class Writing
English graduate students will demonstrate their abilities as independent critical thinkers, researchers, and writers capable of employing sophisticated skills in written analysis, synthesis, and evaluation of knowledge and of using a professional idiom in making written arguments. The program's success in achieving this objective will be measured by a holistic assessment of graduate class writing.
Indicator
Holistic Assessment Of Graduate Writing
The ability of students to write according to accepted professional standards is a direct indicator of the English MA and MFA programs' success in producing graduates who have acquired appropriate critical thinking, researching, and writing skills and are prepared for future professional endeavors. To that end, a significant amount of student writing is required in English graduate coursework.
To assess the effectiveness of class writing assignments in developing students' ability to make sophisticated arguments about literature, language, and writing disciplines in a critical idiom appropriate to professional standards, the faculty will undertake an annual holistic review of representative graduate student writing produced during the reporting period.
Criterion
Standards For English Graduate Student Writing
At least 92% of representative graduate essays evaluated during the holistic assessment will be scored as acceptable or excellent (a combined score of 5 or higher on the scale described below).
A rubric for evaluating graduate student writing is attached.
Assessment Process:
1. To ensure that the assessment reviews a representative sampling of writing, graduate professors in both long terms (fall and spring) are asked to submit term papers or other significant writing from every third student listed on their class rosters.
2. Two primary readers from among the graduate English faculty independently read and score each essay under review; in the case of an unreliable result, the essay is referred to a secondary reader, who reads the essay independently, without any knowledge of the previous scores (see number 5, below).
3. Each primary reader scores each essay on a 4-point scale, with 4 as the highest possible score. The two primary scores are added to yield a combined total ranging from 2 (lowest possible) to 8 (highest possible). A combined score of 5 or higher is passing: a score of 7 or 8 indicates an excellent essay; a score of 5 or 6, an acceptable essay; and a score of 4 or less, an unacceptable essay.
4. Reliability of the two scores is assumed when the scores from the primary readers are congruent, that is, when they are within 1 point of each other. For example, a reliable score of 6 means that both readers marked the essay as a 3; a reliable score of 5 means that one reader assessed the essay as a 3 and the other as a 2.
5. Should the primary scores for an essay not be reliable (for example, a 4 and a 1, a 3 and a 1, or a 4 and a 2), the essay is referred to a secondary reader. If that reader agrees with the higher score, the essay is certified as acceptable or excellent; if the secondary reader agrees with the lower score, the essay is certified as unacceptable. (These rules are sketched in code below.)
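For illustration, the scoring and adjudication rules above can be expressed as a short sketch in Python. This is our own illustrative code, not part of the official procedure; the function names are assumptions, and because the process does not assign a combined numeric score once a secondary reader has intervened, the adjudicated case is reported simply as certified or unacceptable.

```python
from typing import Optional


def rating(total: int) -> str:
    """Map a combined two-reader score (range 2-8) to a rating."""
    if total >= 7:
        return "excellent"
    if total >= 5:
        return "acceptable"
    return "unacceptable"


def certify(primary_a: int, primary_b: int,
            secondary_agrees_with_higher: Optional[bool] = None) -> str:
    """Each primary reader scores an essay from 1 to 4. Scores within
    one point of each other are congruent, and their sum is reliable.
    Divergent scores go to a secondary reader: agreement with the
    higher primary score certifies the essay; agreement with the
    lower score fails it."""
    if abs(primary_a - primary_b) <= 1:  # congruent: result is reliable
        return rating(primary_a + primary_b)
    if secondary_agrees_with_higher is None:
        raise ValueError("divergent primary scores need a secondary reader")
    if secondary_agrees_with_higher:
        return "certified (acceptable or excellent)"
    return "unacceptable"


print(certify(3, 3))  # acceptable (combined score of 6)
print(certify(4, 4))  # excellent (combined score of 8)
print(certify(4, 1, secondary_agrees_with_higher=False))  # unacceptable
```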
Finding
Findings Of Holistic Writing Assessment
For the reporting periods from 2009 through 2012, an average of 95% of essays read for the holistic assessment earned an acceptable or excellent rating. Because of a scheduling error, however, the department did not undertake the holistic review for the 2012-2013 reporting period. It will resume the assessment process for the 2013-2014 academic year.
Action
Developing Students' Writing Abilities
An average of 95% of representative English graduate student writing reviewed over the previous three assessment periods met or exceeded the acceptable rating. Developing students' abilities to analyze, synthesize, and evaluate knowledge in writing, however, remains a primary program objective.
The first and most obvious action for measuring the program's success in accomplishing this objective is to resume the holistic review of graduate writing for the 2013-2014 assessment cycle. To that end, the graduate director has already gathered representative writing from Fall 2013 graduate classes.
The burden of introducing students to professional research methods and establishing standards for critical and expository writing rests largely with individual classroom instructors, who provide formal guidance, models, and assessment. Because of the variety of classroom writing assignments and of professors' approaches to writing, it is difficult to impose uniform standards on coursework writing. After consulting with the Director of Writing in the Disciplines, however, the graduate director will make any necessary revisions to the assessment rubric and will then supply graduate professors with the rubric, ensuring that all agree on the standards by which the program measures its success in achieving the objective.
Graduate faculty can also guide students in other ways toward independent critical thinking, researching, and writing. One suggestion is that each graduate faculty member serve as a mentor to a set number of students, assigned at the beginning of a long term. While the graduate director would still be responsible for general advising and new student orientation, the mentors would be available to discuss class research and writing projects with their advisees.
Objective
Demonstrating Critical Thinking And Writing Skills And Breadth Of Knowledge: The Written Comprehensive Examination
English students will demonstrate that they have a graduate-level breadth of knowledge in literature, language, and writing disciplines and that they can express that knowledge in writing. The program's success in achieving this objective can be measured by the pass rate for the written comprehensive examination required of all students who take a graduate English degree at Sam Houston State University.
Indicator
The Written Comprehensive Examination
A passing score on the written comprehensive examination is a direct indicator that a student in English has acquired a breadth of knowledge in the subject, has developed critical reading and writing skills appropriate to a graduate-level education in English, and is well prepared for future professional endeavors. For the examination, students choose three comprehensive areas from among thirteen broad topics in literature, language, and writing disciplines. To demonstrate mastery of a broad range of materials, they must choose at least one British literature area and one American literature area, including at least one early (pre-1800) and one later (post-1800) British or American literary area. For each area, students are given a reading list of works selected by faculty area experts.
During the exam itself, the student chooses one of three questions for each area and has two hours to respond to that question. A double-blind grading system is used to evaluate the candidates' proficiency. Three graduate faculty members read and evaluate each essay.
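As an illustration of the area-selection rules above, a minimal sketch in Python follows. The Area fields and sample names are our own shorthand for illustration, not the department's actual list of thirteen topics.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Area:
    name: str
    nationality: str          # "British", "American", or "other"
    pre_1800: Optional[bool]  # None for non-period areas (language, writing)


def valid_selection(areas: List[Area]) -> bool:
    """Check the breadth requirements described above: three areas,
    at least one British and one American literature area, and at
    least one early (pre-1800) and one later (post-1800) British or
    American literary area."""
    if len(areas) != 3:
        return False
    lit = [a for a in areas if a.nationality in ("British", "American")]
    return (
        any(a.nationality == "British" for a in lit)
        and any(a.nationality == "American" for a in lit)
        and any(a.pre_1800 is True for a in lit)
        and any(a.pre_1800 is False for a in lit)
    )


# A chronologically clustered but technically valid slate:
slate = [
    Area("American literature before 1800", "American", True),
    Area("19th-century British literature", "British", False),
    Area("19th-century American literature", "American", False),
]
print(valid_selection(slate))  # True: the rules permit clustering
```

As the sample slate shows, these rules still permit the kind of chronological clustering discussed under the findings below.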
Criterion
Written Comprehensive Examination Pass Rate
At least 90% of examination essays will pass (with a grade of pass or high pass). The method of measuring success in achieving the objective has changed since the previous assessment, when we counted the number of students who passed. Because most students who fail area exams pass them on a second take, the pass rate for individual essays is a more precise measure of how well the program is achieving the objective.
If we apply the new method to the exam results for Academic Year 2011-2012, 69% of essays passed.
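The difference between the two measures can be sketched as follows (Python; the grades are invented purely for illustration and are not actual exam data):

```python
# Each student writes one essay per chosen area; True = pass.
# Hypothetical first-sitting grades, for illustration only.
first_sitting = {
    "student_a": [True, True, False],
    "student_b": [True, True, True],
    "student_c": [False, True, True],
}

# New measure: pass rate over individual essays.
essays = [grade for grades in first_sitting.values() for grade in grades]
essay_rate = sum(essays) / len(essays)          # 7 of 9, about 78%

# Old measure: a student counts as passing once every area is passed,
# typically after retaking any failed area; assume here that both
# failed essays pass on the second take.
retake_passed = {"student_a": True, "student_c": True}
student_rate = sum(
    all(grades) or retake_passed.get(name, False)
    for name, grades in first_sitting.items()
) / len(first_sitting)                          # 3 of 3, 100%

print(f"essay pass rate:   {essay_rate:.0%}")   # 78%
print(f"student pass rate: {student_rate:.0%}") # 100%
```

The essay rate registers first-sitting failures that the student rate conceals, which is why it is the more specific measure.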
An examination grading rubric and sample pass, fail, and high pass essays are attached.
Finding
Written Comprehensive Examination Results: 2012-2013
In Academic Year 2012-2013, thirteen students sat for comprehensive exams during three sessions (Fall, Spring, and Summer); a handful of these were retaking area exams they had failed in the previous academic year. Of the essays written during these sessions, 82% passed, with six essays failing.
(Two of the failing essays were the result of students' not having responded to the questions. While we might adjust the pass rate to account for this variable, presumably these two students failed to address the questions because they lacked the breadth of knowledge or the critical thinking and writing skills that the exam measures; we therefore include these essays among the failures.)
Observation about findings: The graduate faculty have expressed concern in the past about students who cluster their exam areas to ease the burden of preparation. Rather than spreading their areas over a broad range of literature, some will choose, for example, American literature before 1800, 19th-century British literature, and 19th-century American literature, narrowing the range to a mere 250 years or so. One significant finding for this assessment cycle, however, is that four of the six failing essays came from students who clustered their areas, suggesting that this strategy may be an indicator of general academic weakness or lack of confidence.
Conclusions about findings: The pass rate of 82% is an improvement over the 69% for Academic Year 2011-2012, but it still falls short of the projected 90%. (For a comparison of pass rates by exam area for the last four assessment cycles, see the attachment, "English Graduate Comprehensive Examination Pass Rate: Percentage/Number of Essays, 2009-2013.")
We have considered possible reasons for the failure to meet the projected pass rate:
(1) Students did not prepare well enough: They may have failed to give themselves enough time to read and synthesize all of the works and critical issues in a chosen area. They may have gambled by not reading all of the works on prescribed reading lists. Or their preparation may have been misdirected.
(2) Students did not receive adequate guidance in their preparation. The graduate director offers biannual exam prep sessions, and students are urged to consult faculty area experts in preparing for the exam. Not all students attend the prep sessions or seek out advice from area experts, however. And it is possible that the prep sessions do not adequately prepare the students.
Although students who fail essays sometimes complain that their graduate classes did not prepare them for the exams, it is difficult to establish a significant statistical correlation between coursework and the examination pass rate. For one thing, students sometimes take exam areas for which they have had no graduate coursework. For another, while the student's classes will suggest approaches for reading and analyzing literature and for synthesizing bodies of information, the exams are not tied specifically to courses. A professor may teach an MA-level survey of literature and English language that covers many works on an area reading list, but there is no contractual obligation that she or he do so.
Students are advised that the responsibility for reading all of the works on the area reading lists and for making comprehensive sense of them rests, finally, with them.
(3) Faculty expectations for the exam are too rigorous. While faculty do have high expectations for students' performance on the exam, both the reading lists and the exam questions have been carefully suited to MA-level students in the discipline. We have also found that, since the current exam system was instituted ten years ago, students have been much better prepared for PhD work and college teaching in the field.
(4) Testing circumstances affect the students' performance. The graduate faculty readers are aware of the highly artificial and often intimidating circumstances under which students take the exam, and they make allowances for testing anxieties. However, the faculty also believe that a student who has prepared adequately will be able to perform well enough under these circumstances.
(5) The projected pass rate has been set too high. Perhaps the expectation that nine of ten essays pass is unrealistic.
Action
Preparing Students For The Written Comprehensive Examination
The ability to make an effective argument about any subject requires, first, a thorough knowledge of the subject. Students must understand that the burden of acquiring this knowledge through their independent reading and their classwork rests, finally, upon them.
Nonetheless, there are also processes by which the graduate faculty can help in preparing the students:
The graduate director continues to publish an exam prep booklet and to conduct biannual comprehensive examination prep sessions, during which he discusses the exam process, suggests strategies for preparing and for addressing exam questions, and presents exemplary questions and responses.
It is difficult to measure objectively how effective the exam prep sessions have been: students are not required to attend, and the graduate director has not kept records to see whether attendance corresponds with the pass rate. One suggestion for improving the pass rate, however, is to require attendance at one or more of these sessions. Beginning with the 2013-2014 reporting period, the graduate director will also keep records to see whether attendance corresponds with success.
Other graduate faculty have been involved in the preparation process in two ways: Although the examination is expressly kept separate from classwork, some instructors use typical exam questions in their courses for midterm and/or final examinations, as a way of acclimating students to the comprehensive exam expectations and circumstances. Others give advice informally to students who approach them. Faculty members are not involved uniformly in this preparation, nor do we believe that they should have to be.
One suggestion, however, is that each graduate faculty member serve as a mentor to a set number of students, assigned at the beginning of a long term. While the graduate director would still be responsible for general advising and new student orientation, the mentors would be available to provide specific advice and encouragement and to check on their students' progress in the program during the term. These faculty members could also advise students as they prepare for the written examination.
As with the comprehensive examination prep sessions, measuring the effectiveness of a faculty mentor system objectively is difficult. At the least, however, the program could require that students preparing for the examination meet at least once with their mentors for that purpose.
In response to the persistent failure to meet the projected 90% target pass rate, we also need to consider whether expecting nine of ten essays to pass is unrealistic. Perhaps the best way to do so is to gather information about comprehensive examination pass rates at peer institutions that have similar exams.
Objective
Demonstrating Critical Thinking Skills And Breadth Of Knowledge: Oral Argumentation
English graduate students will demonstrate their knowledge and critical thinking skills through oral arguments. We believe that the ability to make such arguments is necessary for future professional pursuits like teaching and further graduate education. The program's success in achieving this objective can be measured by the pass rate for the oral defense required of all thesis students and the oral comprehensive examination required of all non-thesis students.
Indicator
The Oral Examination
A passing grade on the oral examination required of all students who take the English MA or MFA degree at Sam Houston State University is a direct indicator that graduates are able to demonstrate their critical thinking skills and breadth of knowledge in the field. Thesis students sit for a one-hour oral defense of the thesis; having passed the written comprehensive examination, non-thesis students sit for a one-hour oral comprehensive examination covering the same three areas as those on the written exam. A committee of three graduate faculty members examines each student, awarding the candidate a pass, high pass, or fail, according to her or his ability to respond to specific questions. The committee for the oral defense of thesis comprises the members of the student’s reading committee; the oral comprehensive examination committee comprises area experts appointed by the graduate director.
Criterion
Oral Examination Pass Rate
At least 92% of degree candidates will pass the oral defense of thesis or oral comprehensive exam at the first sitting or upon retaking it.
Thesis defense and oral comprehensive exam grading rubrics are attached.
Finding
Oral Examination Results: 2012-2013
In Academic Year 2012-2013, four students sat for the oral defense of thesis and five sat for the oral comprehensive examination; all nine students passed at the first sitting.
Observation about findings: Despite the pass rate, faculty who sit on oral comprehensive exam committees have still expressed disappointment with the quality of some students' responses, which they feel demonstrate weak arguments and a marginal knowledge base.
Conclusions about findings: While all students who sat for oral examinations during the assessment period passed, the distinction between those who earned a high pass and those who earned a pass is one measure of quality. Two of the four students who sat for the oral defense of thesis were awarded high passes; none of the five students who sat for the oral comprehensive examination were. There is also the less easily measurable anecdotal evidence of faculty disappointment with the general quality of students' arguments during the oral comprehensive exams.
One obvious reason for the discrepancy is that the expectations for knowledge in the two types of oral examinations are unequal. Thesis students have a much narrower subject and often know it as well as, or better than, the examining faculty; oral comprehensive exam students are expected to demonstrate the same breadth of knowledge as that required for the written comprehensive examination. And while the atmosphere of the oral exam is presumably less formal, with faculty examiners sometimes offering hints or suggesting ways to approach a response, many of our students find the exam terrifying.
Action
Preparing Students To Make Oral Arguments
During the last four assessment cycles, all twenty students who have sat for an oral defense of thesis and all twenty-six students who have sat for an oral comprehensive examination have passed. The 100% pass rate does not suggest, however, that the program should relax its efforts to prepare its students to make oral arguments. Nor should it suggest that oral examinations are the only way to measure the program's success in preparing students to make oral arguments.
While all students passed, the discrepancy in the quality of their oral arguments suggests that the department should discuss both the nature of the exams and the expectations: What purposes does the oral comprehensive examination serve? What should the examiners' expectations be? What does the department need to do to improve the quality of students' responses during these exams?
Preparing the students for making oral arguments overlaps significantly both with preparing them for the written comprehensive examinations and with assessing their graduate-level critical and expository writing: All such endeavors require a thorough knowledge of the subject under discussion. Students are advised, first, to know well the subjects about which they are speaking.
To prepare the students specifically for making oral arguments, however, the department has considered requiring an oral component in one or more of their graduate classes, perhaps one that duplicates the circumstances of the thesis defense or oral comprehensive exam. One logical suggestion is that such a component be part of the research and methods class (ENGL 5330) required of English graduate students before they declare their degree plans.
As with efforts to prepare students for the written comprehensive examination, it is difficult to measure objectively the effectiveness of such a requirement, especially because 100% of students during the reporting period have passed the oral exam.
Formally instituting a faculty mentor system could also help if, for example, the mentors had their advisees sit for mock oral exams.
Faculty will also continue to urge students to attend academic conferences, at which they must not only present their arguments about literature and language orally but also respond to questions and challenges from a professional audience.