Graduate Literature Program

Overview: 

PROGRAM MISSION:

The primary mission of the Graduate Literature program is to provide rigorous, high-quality graduate-level training for students preparing for academic careers in teaching and research at the college and university levels. To this end, the program offers:

• a thorough grounding in the range of English and American literatures;
• the opportunity to pursue specialized advanced study within these fields as well as in interdisciplinary specializations such as American Indian Studies, Cultural Studies, Colonial and Post-Colonial Studies, Comparative Modernisms, Literary Theory, Gender and Sexuality Studies, Early Modern Studies, and Visual Cultures;
• an introduction to the profession and protocols of literary studies, including the presentation and publication of research and criticism;
• experience and training in research, criticism, theory, and academic writing, culminating in a Ph.D. dissertation;
• experience and training in teaching at the college/university level; and
• systematic training and guidance to prepare students for the academic job search.

Course requirements are flexible, allowing both a broad range of literary study at the outset of graduate training and appropriate specialization for more advanced students.

Expected Learning Outcomes: 

STUDENT LEARNING OUTCOMES FOR THE MA AND PHD PROGRAMS:

The program’s overarching learning outcomes for students at the MA level include:

  • General knowledge of the history and diversity of literatures in English
  • Specific knowledge of canonical texts in a wide range of literary genres and an awareness of their relation to literary history and historical context
  • Ability to engage in and contribute to scholarly debates both orally and in writing
  • Ability to conduct research and incorporate that research into sophisticated and well-crafted arguments that respond to critical debates in the field.

Overarching learning outcomes for students at the PhD level include:

  • Ability to produce significant contributions to peer-reviewed research in the student’s chosen fields.
  • Ability to contribute to both the undergraduate and graduate teaching missions of programs in literature located in post-secondary institutions, as indicated by
    • the student’s demonstrated broad knowledge of the field as well as
    • the confluence of her or his areas of expertise with typical curricula in undergraduate majors and graduate literature programs.
  • Ability to contribute actively to scholarly and pedagogical learning environments, as indicated by
    • publishable-quality written work,
    • conference presentations,
    • examination performance, and
    • indications of the potential to contribute actively to community-based learning.
  • Ability to contribute to the academic community positively and appropriately in both written and spoken contexts.

Assessment Activities: 

DIRECT ASSESSMENT OPPORTUNITIES:

The Literature Program’s degree of success in enabling students to meet or surpass the above overarching learning outcomes is measured by pooling and evaluating the evidence accumulated through the following direct assessment instruments of individual student performance (indirect assessment instruments are detailed in a subsequent section of this document). Each evaluative instrument measures student learning outcomes on the occasion of an important program milestone. At the MA level we assess student (and program) performance on the following:
• MA written examination
• MA oral examination
• Qualifying paper (submitted by students who want to continue to the PhD)

The following program milestones at the PhD level are used for direct assessment of the above overarching learning outcomes:
• Comprehensive written examination
• Comprehensive oral examination
• Dissertation
• Mock job interview (for those students who sign up for this voluntary activity)

RUBRICS FOR DIRECT ASSESSMENT INSTRUMENTS:

We measure student learning outcomes on each of the above milestone occasions through faculty discussion (in which all members of the student’s committee take part) culminating in individual faculty members evaluating several specific aspects of the student’s learning by scoring the performance using the specific rubric devised for that program milestone. (As noted above, all scores accumulated by the program are periodically pooled, evaluated, and presented to the literature faculty for consideration of possible improvements to the program through changes in program design.) All our direct assessment instruments ask faculty to score student performance on a 4-point scale (Surpasses Program Standards, Meets Program Standards, Barely Meets Program Standards, Fails to Meet Program Standards) in relation to several particular learning outcomes and to provide an overall assessment of the student’s performance. Space for additional comments by individual faculty members is also provided.

(Please note: to see the assessment instruments as they will actually be distributed to faculty and students—formatted as tables—please consult the UA Provost’s Assessment website at http://assessment.arizona.edu/hum/English%20Grad. In what follows, tables have been converted to bullet-point form to save space.)

DIRECT ASSESSMENT INSTRUMENTS AT THE MA LEVEL:

MA Written Examination:
Employing discussion followed by individual scoring of each rubric by each faculty member, the committee assesses the following aspects of student performance and also provides an overall assessment (all scores accumulated by the program are periodically pooled, evaluated, and presented to the literature faculty for consideration of possible improvements to the program through changes in program design). Scoring scale for each item: Surpasses, Meets, Barely Meets, Fails to Meet.
1. Comprehension of and facility with the general traditions of English literature
2. Ability to analyze literature
3. Ability to apply literary analysis to a general narrative of English literary traditions and conventions
4. Ability to write clearly and cogently
5. Overall assessment of student learning

MA Oral Examination:
Students proceed to the Oral Exam if the committee agrees that the MA Written offers sufficient evidence that they are ready to do so. The Oral is the more important of the two exams.

Employing discussion followed by individual scoring of each rubric by each faculty member, the committee assesses the following aspects of student performance and also provides an overall assessment (all scores accumulated by the program are periodically pooled, evaluated, and presented to the literature faculty for consideration of possible improvements to the program through changes in program design). Scoring scale for each item: Surpasses, Meets, Barely Meets, Fails to Meet.
1. Depth of knowledge of the work of those writers who have established and/or transformed their literary genres
2. Depth of knowledge of the work of those writers who have established and/or transformed their literary periods
3. Breadth of knowledge as to the impact the work of significant authors has had on major literary genres
4. Breadth of knowledge as to the impact the work of significant authors has had on major literary periods
5. Breadth and depth of knowledge of earlier periods (through 18th Century)
6. Breadth and depth of knowledge of later periods (19th Century forward)
7. Ability to move cogently and convincingly among and between texts and groups of texts in response to questions and comments posed by scholars in a variety of fields
8. Critical acumen in the discussion of individual texts
9. Critical acumen in the discussion of broad groupings of texts (e.g., literary periods, literary movements, and the like)
10. Overall assessment of student learning

Qualifying Paper:
The Qualifying paper offers evidence of a student's readiness to pursue a Ph.D. in literature.

Employing discussion followed by individual scoring of each rubric by each faculty member, the committee assesses the following aspects of student performance and also provides an overall assessment (all scores accumulated by the program are periodically pooled, evaluated, and presented to the literature faculty for consideration of possible improvements to the program through changes in program design). Scoring scale for each item: Surpasses, Meets, Barely Meets, Fails to Meet.
1. Significant progress toward presenting original work in a professional manner as established by major, peer-reviewed journals and presses in the fields relevant to the paper's purview
2. Superior professional writing ability, as measured by the work published by established scholars in peer-reviewed venues
3. Critical judgment and acumen in choosing and digesting the current academic conversations relevant to the paper's purview
4. An ability to generate well-supported analysis, grounded in the current critical conversations relevant to the paper's purview
5. An ability to ground the analysis in the major historical lines of the critical and theoretical conversations that precede the current critical conversation
6. Demonstration in the paper that the student has grasped the full range of pertinent scholarship and established his or her argument as one that will contribute to our understanding of the literary issues the paper raises
7. Superior grasp of the mechanics of formal scholarly documentation and style
8. Overall assessment of student learning

In assessing the Qualifying Paper, one committee member (typically the Chair) also evaluates the Writing Sample the student submitted as part of his or her application to the program, using the same rubric used to score performance on the Qualifying Paper. Scores on the two papers are compared for evidence of significant progress (or the lack thereof) during the student’s tenure in the MA program. (All scores accumulated by the program are periodically pooled, evaluated, and presented to the literature faculty for consideration of possible improvements to the program through changes in program design.)

DIRECT ASSESSMENT INSTRUMENTS AT THE PHD LEVEL:

Comprehensive Written Examination:
Employing discussion followed by individual scoring of each rubric by each faculty member, the committee assesses the following aspects of student performance and also provides an overall assessment (all scores accumulated by the program are periodically pooled, evaluated, and presented to the literature faculty for consideration of possible improvements to the program through changes in program design). Scoring scale for each item: Surpasses, Meets, Barely Meets, Fails to Meet.
1. Breadth and depth of knowledge of a literary genre, period, and two major authors
2. A professional-level ability to situate discussions prompted by the committee's written questions in an appropriately chosen scholarly approach
3. A professional-level awareness of the state of the scholarly fields explicitly or implicitly invoked by the committee's written questions
4. A professional-level ability to organize the allotted time to produce a coherent and finished response to the written questions of the committee
5. Ability to address the written questions directly and cogently
6. Clarity of expression
7. Overall assessment of student learning

Comprehensive Oral Examination:
Students proceed to the Comprehensive Oral Examination if the committee agrees that the written examination offers sufficient evidence that they are ready to do so. The Oral is the more important of the two exams.

Employing discussion followed by individual scoring of each rubric by each faculty member, the committee assesses the following aspects of student performance and also provides an overall assessment (all scores accumulated by the program are periodically pooled, evaluated, and presented to the literature faculty for consideration of possible improvements to the program through changes in program design). Scoring scale for each item: Surpasses, Meets, Barely Meets, Fails to Meet.
1. Sufficiently thorough grasp of the material such that an ability to move among and between texts is demonstrated in response to the committee's oral questions
2. Depth of knowledge of the history of the academic fields represented by the items on each of the fields covered by the exam lists
3. Breadth of knowledge of the way in which the fields are situated in academic conversations, past and present, in peer-reviewed journals and presses
4. Depth of knowledge of each individual text on each list
5. Breadth of knowledge of the part each text takes in defining, extending, and/or challenging the fields in which they play a significant part
6. Ability to directly, cogently, and explicitly respond to the specific questions posed by the committee
7. Ability to move freely among texts and groups of texts in constructing on the spot analyses and arguments
8. Overall assessment of student learning

Dissertation Proposal:
Employing discussion followed by individual scoring of each rubric by each faculty member, the committee assesses the following aspects of student performance and also provides an overall assessment (all scores accumulated by the program are periodically pooled, evaluated, and presented to the literature faculty for consideration of possible improvements to the program through changes in program design). Scoring scale for each item: Surpasses, Meets, Barely Meets, Fails to Meet.
1. Evidence of the scholarly research already conducted by the writer
2. Evidence of knowledge of the academic conversations the dissertation intends to engage, extending back to the first appearance of the relevant scholarly discussion
3. A substantial proposed working bibliography, including the major critical and theoretical works likely to influence the dissertation
4. A clear and concise rationale for the choice of each major text the dissertation will take up
5. A discussion which delineates with precision the key arguments the dissertation proposes to make
6. A clear statement of the original contribution the dissertation proposes to make to the relevant fields

Dissertation:
Employing discussion followed by individual scoring of each rubric by each faculty member, the committee assesses the following aspects of student performance and also provides an overall assessment (all scores accumulated by the program are periodically pooled, evaluated, and presented to the literature faculty for consideration of possible improvements to the program through changes in program design). Scoring scale for each item: Surpasses, Meets, Barely Meets, Fails to Meet.
1. Formal academic writing ability that compares favorably to work published in peer-reviewed journals and presses
2. Knowledge of the history of criticism and theory related to the fields of inquiry engaged by the topic of the dissertation
3. Ability to craft an analysis and argument spanning the dissertation that compares favorably to work published in peer-reviewed journals and presses
4. Scholarly work that makes an original contribution to the academic fields it engages
5. Critical and textual acumen in the treatment of both individual texts and the scholarly fields in which they are located
6. Overall assessment of student learning

Mock Interview (for those students who sign up for this voluntary activity)
Employing discussion followed by individual scoring of each rubric by each faculty member, the committee assesses the following aspects of student performance and also provides an overall assessment (all scores accumulated by the program are periodically pooled, evaluated, and presented to the literature faculty for consideration of possible improvements to the program through changes in program design). Scoring scale for each item: Surpasses, Meets, Barely Meets, Fails to Meet.
1. Ability to field and cogently address questions that are likely to be posed by a hiring committee in the student's fields
2. Professional self-presentation
3. Thorough knowledge of the fields addressed by the dissertation
4. Ability to present expertise in the student's teaching fields
5. Thorough preparation about the school and program that is posited as doing the interview
6. Overall assessment of student learning

INDIRECT ASSESSMENT:

Formal indirect assessment takes the form of exit interviews of students immediately upon completion of the MA and PhD programs. Students are asked to provide rated responses to questions about the value of key program milestones as well as the program as a whole; they are also asked for additional comments. Queries such as “How valuable a part of your training was the mock interview?” use a 4-point scale (very valuable, valuable, not very valuable, not valuable). All exit interviews accumulated by the program are periodically pooled, evaluated, and presented to the literature faculty for consideration of possible improvements to the program through changes in program design.

MA Indirect Assessment Instrument:
How valuable a part of your training was each of the following? Scoring scale for each item: Very Valuable, Valuable, Not Very Valuable, Not Valuable.
1. seminar work
2. mentoring by individual faculty members
3. the First Year Literature Colloquium
4. MA examination (preparation for the examination as well as the actual written and oral exams)
5. Qualifying paper
Scoring scale for the following item: Excellent, Good, Fair, Poor.
6. Please evaluate the performance of the program as a whole in preparing you for the profession

PhD Indirect Assessment Instrument:
How valuable a part of your training was each of the following? Scoring scale for each item: Very Valuable, Valuable, Not Very Valuable, Not Valuable.
1. seminar work
2. mentoring by individual faculty members
3. the First Year Literature Colloquium
4. Comprehensive Examination (preparation for the examination as well as the actual written and oral exams)
5. Dissertation
6. Job Placement Seminar (if you took it)
7. Mock interview (if you did one)
Scoring scale for the following item: Excellent, Good, Fair, Poor.
8. Please evaluate the performance of the program as a whole in preparing you for the profession

NOTES ON INFORMAL ASSESSMENT:

The program has historically used the following informal assessment procedures to evaluate not only individual students but the program itself: results of individual student performance are informally aggregated by faculty in periodic group discussions and have served as one important basis for devising and implementing changes in program design. While such informal assessment activities will continue to be part of program evaluation, the formal assessment procedures recently put in place and detailed above are now our most important means of evidence-based program assessment and should lead to continuing program renovation.

Individual Evaluation in Graduate Seminars:
The means of assessment vary, but consist primarily of evaluation of class presentations, participation, and especially seminar papers. Some professors also require take-home essay tests. Several seminars include the public presentation of research in a small conference setting, providing preparation and practice for presentations at national conferences in the field. In periodic discussion by the faculty as a group, such assessment of individual student performance on key program learning outcomes is informally aggregated and leads to periodic changes in program design.

Review of Student Teaching:
Nearly all entering students are offered GATships, which form the central element in their pedagogical preparation. Each semester, the Director of Graduate Studies reviews each GAT’s teaching evaluations, which include TCE report summaries, TEAD (Teaching Advisor) reports, and a written self-analysis. This assessment serves as a basis for counseling the student and for determining eligibility for literature teaching and advanced TAing opportunities. GATs assigned to literature teaching are supervised by a member of the literature faculty. The overwhelming majority of teaching done by GATs in our program is in courses offered and supervised by the Writing Program. Thus programmatic assessment based on GAT performance falls to the Writing Program with the Literature Program playing only an advisory role when invited to do so.

Annual Report:
Every January, each graduate student in the program submits a ten-page annual report analyzing his/her academic progress, achievements, teaching activities and development, research activities, publications, and presentations for the preceding year, and identifying goals for the coming year. This report serves as a basis for a meeting or meetings between each student in the program and the Director to assess individual progress toward the degree, identify areas of academic strength and weakness, and plan an ongoing course of study. At the same time, the Director accumulates evidence from individual written reports and follow-up interviews and periodically presents the results to the faculty for discussion and consideration of possible changes in program design. Moreover, the annual report also asks the student to evaluate the graduate literature program, including suggestions about how it might better meet student needs. These results, too, are periodically aggregated and presented to the faculty for consideration of possible changes in program design.

Recent Program Changes in Response to Informal Program Evaluation

The Graduate Literature Program has recently made a number of adjustments to program design, an outgrowth of faculty discussion and vote based on the informal assessments outlined above. During the 2010-2011 academic year, the program:

• Approved a more detailed and stringent Satisfactory Progress policy in response to infrequent but sometimes protracted cases of student difficulty with timely progress and/or performance in seminars as measured by grades (See Appendix 9 for the text of the new Satisfactory Progress policy.)
• Approved minor changes to the procedures for selecting and supervising GATs assigned to teach literature or film courses
• Approved minor changes to the rules governing the Comprehensive Examination and the pertinent passages in the Literature Program Handbook

The faculty also agreed to table, for future consideration, a proposal from the Graduate Literature Committee concerning the reading lists for the Comprehensive Examination.

Implementation Guidelines

The Graduate Literature program will begin implementation of the above formal assessment activities in AY 2012-2013. At the end of the 2012-2013 academic year, the program director will call a meeting at which he or she will present initial findings and invite discussion of any proposed modifications to the Graduate Literature program (which must be approved by a majority of those voting in a subsequent mail ballot of the literature faculty). The assessment instruments themselves will not be subject to modification until the end of the following academic year (AY 2013-2014), so that a significant pool of data can be accumulated using a consistent set of criteria. At the end-of-year meeting for AY 2013-2014, the program director will present the cumulative results of the pooled data, and program faculty will have the option to propose program modifications based on the data as well as modifications of the assessment instruments themselves (changes to the program or to the assessment instruments must be approved by a majority of those voting in a mail ballot of the literature faculty).

Assessment Findings: 

PROCEDURES:

In both Spring 2014 and Spring 2015 the faculty of the Literature program reviewed assessment findings, which had been compiled by the Program Director and the Graduate Literature Committee. In Spring 2014 we reviewed assessment data collected in 2013 (Spring 2013 and Fall 2013), while in Spring 2015 we reviewed the cumulative data collected in both 2013 and 2014 (Spring and Fall 2013 and Spring and Fall 2014). Summaries of the findings in Spring 2014 and Spring 2015 appear in sequence below.

 

SPRING 2014 ASSESSMENT FINDINGS:

MA Written Exam

A strong majority of faculty ranked students' performance on the MA written exam as "Surpassing" expectations in every criterion. The rubric in which student performance is the strongest measures "Ability to write clearly and cogently." 74% of students surpassed expectations in this rubric, and the remaining 26% met expectations. The rubric with the widest assessment spread (rubric #3) measures "Ability to apply literary analysis to a general narrative of English literary traditions and conventions"; for this rubric, 16 students surpassed expectations, 7 met them, and 4 barely met them. Data suggest that the program is serving students well by every measure but that we should offer more courses that develop the student outcomes specified in rubric #3.

MA Oral Exam

As is the case in all data related to the Graduate Literature Program, no student failed to meet the expected outcomes for any performance criterion. The majority of students surpassed expectations for the overall assessment of student learning, and all of the remainder met those expectations. However, 7% barely met expectations in several categories related to knowledge of literary fields and literary genres. The data suggest that the program should offer more courses that increase students' knowledge of broad periods and genres.

Terminal MA Exit Interview (indirect assessment instrument)

The data indicate that our terminal MA is serving students very well: 100% of respondents rated every applicable aspect of the program "Very valuable" or "Valuable." One rubric asked about the value of the Qualifying Paper (QP); because terminal MA students do not submit a QP, they indicated that the experience was "Not valuable." The program should omit the QP rubric, since it is not applicable to the terminal MA degree.

PhD Qualifying Paper

There is a greater spread in the data for the QP than for any other assessment instrument. The wider range may be because two of the committee members are not chosen by the student and serve as independent assessors. While no student failed to meet expectations for any of the rubrics, there were four categories (out of 8 total) in which 25% of the students barely met expectations. Three of those four categories assessed students' ability to deploy a range of historical scholarship, and the fourth compared the students' writing ability to that of published scholars in peer-reviewed venues. The faculty's assessment of overall student learning was that 92% of students met or surpassed expectations for the QP, while only one student barely met the overall expectations.

Comprehensive Written Exam

Our findings give no indication that we need to make major changes to the program. Overall, students are meeting expectations. The degree of faculty agreement is also encouraging: individual faculty assessments typically fall within one evaluative category of one another.

Comprehensive Oral Exam

The great majority of students meet expectations, and no one failed to meet expectations in any of the rubrics for either the written or the oral comprehensive exam. However, in the penultimate rubric, "Ability to move freely among texts and groups of texts in constructing on the spot analyses and arguments," 40% barely met the desired outcome, a higher percentage than we would like; we hope to improve results in this area.

Dissertation

All but one of the dissertation defenses surpassed or met expectations. All of the "barely met" assessments were from one student's defense. The outcome in which students were strongest was knowledge of theory and criticism related to their project. The findings suggest that, overall, the program meets expectations for the dissertation.

Mock Interview

Data suggest that we do not need to make major changes to the program. Student performance surpassed or met expectations in every aspect measured.

_______________________________________

SPRING 2015 ASSESSMENT FINDINGS:

The cumulative assessment findings from 2013 and 2014 continue to suggest that a high majority of students in the program are achieving the Learning Outcomes stipulated by program faculty. Each of the direct assessment instruments, at both the MA and PhD levels, confirms this positive view. Moreover, indirect assessment instruments, also at both the MA and PhD levels, reveal students’ high opinion of the training they have received in the program.

 

It is worth noting that after two years these findings remain preliminary: since we are dealing with a selective graduate program rather than a large undergraduate population, the data pool is still small. 

 

Below is a summary table of results to date, followed by an explanation of some of the column headings. There then follow detailed tables showing results for each of the individual categories on each of the instruments.

 

The following table gives scores for faculty assessment of “overall” student performance on each of the direct assessment instruments, and for student assessment of the program on the indirect assessment instruments. (The final question on each of the instruments asks for an assessment of “overall” performance.)

 

exam: overall   | % surpasses | % meets | % barely meets | % fails to meet | tot % | % surpasses + meets | % barely + fails | questionnaires | committee size | # students | # students barely/fail
MA written      | 45 | 47 | 7  | 0 | 99  | 92 | 7  | 55 | 3 | 18.3 | 1.3
MA oral         | 46 | 37 | 17 | 0 | 100 | 83 | 17 | 52 | 3 | 17.3 | 2.9
Quals           | 40 | 57 | 3  | 0 | 100 | 97 | 3  | 29 | 4 | 7.3  | 0.2
Comps Writtens  | 46 | 38 | 14 | 2 | 100 | 84 | 16 | 50 | 4 | 12.5 | 2.0
Comps Orals     | 42 | 42 | 14 | 2 | 100 | 84 | 16 | 50 | 4 | 12.5 | 2.0
Dissertation    | 79 | 16 | 4  | 0 | 99  | 95 | 4  | 34 | 3 | 11.3 | 0.5
Mock Interview  | 12 | 76 | 6  | 6 | 100 | 88 | 12 | 17 | 3 | 5.7  | 0.7

exit (indirect) | excellent | good | fair | poor | tot % | ex/good | fair/poor | questionnaires | committee size | # students | # students fair/poor
MA              | 38 | 62 | 0  | 0 | 100 | 100 | 0  | 13 | 1 | 13.0 | 0.0
PhD             | 50 | 25 | 25 | 0 | 100 | 75  | 25 | 4  | 1 | 4.0  | 1.0

 

A note on the column headers:

 

Columns 2-5 give percentages of faculty who scored performance at each of the four levels of achievement (surpasses, meets, barely meets, fails to meet.) (For the indirect assessment instruments: the number of students who judged the program excellent, good, fair, poor.)

 

Column 6 adds up the percentages in columns 2-5 (the occasional 99 rather than 100 results from rounding the percentages in columns 2-5).

 

Column 7 pools the “surpasses” and “meets” percentages, providing a snapshot of “successful” outcomes. (For the indirect assessment instruments: “excellent” or “good.”)

 

Column 8 pools “barely meets” and “fails to meet,” providing a snapshot of “less successful” plus “unsuccessful” outcomes. (For the indirect assessment instruments: “fair” or “poor.”)

 

The next columns show the size of the data pool:

 

Column 9 shows the total number of questionnaires filled out for each instrument.

 

Column 10 shows the committee size for each exam (or, for the indirect assessments, the number of students at a time filling out the questionnaire, which is always 1).

 

Column 11 divides column 9 by column 10, to show (roughly) the total number of students evaluated so far on each instrument (or, for the indirect assessment instruments, the total number of students who have evaluated the program). Figures which are not whole numbers result from the rare instances in which an examiner does not fill out a questionnaire or leaves the “overall” score blank.

 

Column 12 multiplies column 11 (number of students) by column 8 (percentage scored “barely meets” or “fails to meet”), showing the total number of students evaluated as “barely meeting” or “failing to meet” the learning objectives assessed on each instrument. (Or, for the indirect instruments: the number of students judging the program “fair” or “poor.”) Column 12, in effect, is at this point cautionary: for example, while 16% of the questionnaires for the Comps Orals judge a student as “barely meeting” or “failing to meet” the overall learning objectives assessed in this exam, that translates to 2.0 students; it’s not clear that 16% is a figure high enough to suggest that changes to the program might be desirable, but, even if it is, it would likely be premature to make such changes on the basis of just 2 student exams. (For the indirect assessment instruments: only 1 student, so far, has judged either the MA or the PhD program “fair” or “poor.”)
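
For reference, the derived columns can be reproduced with simple arithmetic. The short Python sketch below is purely illustrative and not part of the program’s assessment procedure; the helper name derived_columns is hypothetical, and the figures used are the Comps Orals values quoted above (50 questionnaires, committees of 4, 16% “barely meets” or “fails to meet”):

```python
# Illustrative sketch only: how columns 11 and 12 of the summary table
# are derived from the questionnaire count, committee size, and column 8.

def derived_columns(questionnaires, committee_size, pct_barely_or_fails):
    """Return (# students evaluated, # students scored barely/fails)."""
    n_students = questionnaires / committee_size                  # column 11
    n_barely_or_fails = n_students * pct_barely_or_fails / 100    # column 12
    return round(n_students, 1), round(n_barely_or_fails, 1)

# Comps Orals row from the summary table above:
print(derived_columns(50, 4, 16))  # -> (12.5, 2.0)
```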

 

But perhaps the most significant column at this point is column 7, which shows the percentage of scores on each direct assessment instrument that judge performance as surpassing or meeting the learning objectives (or, on the indirect assessment instruments, that regard the program as “excellent” or “good”): in the literature faculty’s Spring 2015 review of assessment data to date, we regarded the figures in this column as confirmation that the program is doing well at preparing students in the field of literary studies in English.

 

Below are individual tables for each assessment instrument, showing results for each of the sub-categories of performance on each (as well as separate figures for 2013 and 2014). The columns are similar to what they are for the “overall” table discussed above (the columns in the table above detailing the size of the data pool have been eliminated for the tables below).

 

DIRECT ASSESSMENT INSTRUMENTS

 

MA Written

        | surpasses | meets | barely meets | fails to meet | total | surpasses/meets | barely/fails

1. Comprehension of and facility with the general traditions of English literature
  2013  | 17 | 9 | 1 | 0 | 27
  2014  | 9 | 15 | 4 | 0 | 28
  total | 26 | 24 | 5 | 0 | 55
  %     | 47 | 44 | 9 | 0 | 100 | 91 | 9

2. Ability to analyze literature
  2013  |  |  |  |  | 0
  2014  | 15 | 12 | 0 | 0 | 27
  total | 15 | 12 | 0 | 0 | 27
  %     | 56 | 44 | 0 | 0 | 100 | 100 | 0

3. Ability to apply literary analysis to a general narrative of English literary traditions and conventions
  2013  |  |  |  |  | 0
  2014  | 16 | 7 | 4 | 0 | 27
  total | 16 | 7 | 4 | 0 | 27
  %     | 59 | 26 | 15 | 0 | 100 | 85 | 15

4. Ability to write clearly and cogently
  2013  | 20 | 7 | 0 | 0 | 27
  2014  | 10 | 14 | 4 | 0 | 28
  total | 30 | 21 | 4 | 0 | 55
  %     | 55 | 38 | 7 | 0 | 100 | 93 | 7

5. Overall assessment of student learning
  2013  | 16 | 11 | 0 | 0 | 27
  2014  | 9 | 15 | 4 | 0 | 28
  total | 25 | 26 | 4 | 0 | 55
  %     | 45 | 47 | 7 | 0 | 100 | 93 | 7

 

 

MA Oral

        | surpasses | meets | barely meets | fails to meet | total | surpasses/meets | barely/fails

1. Depth of knowledge of the work of those writers who have established and/or transformed their literary genres
  2013  | 13 | 11 | 2 | 0 | 26
  2014  | 10 | 7 | 10 | 0 | 27
  total | 23 | 18 | 12 | 0 | 53
  %     | 43 | 34 | 23 | 0 | 100 | 77 | 23

2. Depth of knowledge of the work of those writers who have established and/or transformed their literary periods
  2013  | 14 | 10 | 2 | 0 | 26
  2014  | 11 | 7 | 9 | 0 | 27
  total | 25 | 17 | 11 | 0 | 53
  %     | 47 | 32 | 21 | 0 | 100 | 79 | 21

3. Breadth of knowledge as to the impact the work of significant authors has had on major literary genres
  2013  | 12 | 14 | 0 | 0 | 26
  2014  | 11 | 6 | 10 | 0 | 27
  total | 23 | 20 | 10 | 0 | 53
  %     | 43 | 38 | 19 | 0 | 100 | 81 | 19

4. Breadth of knowledge as to the impact the work of significant authors has had on major literary periods
  2013  | 15 | 11 | 0 | 0 | 26
  2014  | 10 | 8 | 9 | 0 | 27
  total | 25 | 19 | 9 | 0 | 53
  %     | 47 | 36 | 17 | 0 | 100 | 83 | 17

5. Breadth and depth of knowledge of earlier periods (through 18th Century)
  2013  | 14 | 12 | 0 | 0 | 26
  2014  | 11 | 9 | 7 | 0 | 27
  total | 25 | 21 | 7 | 0 | 53
  %     | 47 | 40 | 13 | 0 | 100 | 87 | 13

6. Breadth and depth of knowledge of later periods (19th Century forward)
  2013  | 15 | 11 | 0 | 0 | 26
  2014  | 11 | 8 | 7 | 0 | 26
  total | 26 | 19 | 7 | 0 | 52
  %     | 50 | 37 | 13 | 0 | 100 | 87 | 13

7. Ability to move cogently and convincingly among and between texts and groups of texts in response to questions and comments posed by scholars in a variety of fields
  2013  | 13 | 10 | 2 | 0 | 25
  2014  | 11 | 8 | 8 | 0 | 27
  total | 24 | 18 | 10 | 0 | 52
  %     | 46 | 35 | 19 | 0 | 100 | 81 | 19

8. Critical acumen in the discussion of individual texts
  2013  | 14 | 10 | 1 | 0 | 25
  2014  | 9 | 10 | 7 | 1 | 27
  total | 23 | 20 | 8 | 1 | 52
  %     | 44 | 38 | 15 | 2 | 100 | 83 | 17

9. Critical acumen in the discussion of broad groupings of texts (e.g., literary periods, literary movements, and the like)
  2013  | 10 | 13 | 2 | 0 | 25
  2014  | 10 | 8 | 8 | 1 | 27
  total | 20 | 21 | 10 | 1 | 52
  %     | 38 | 40 | 19 | 2 | 100 | 79 | 21

10. Overall assessment of student learning
  2013  | 13 | 12 | 0 | 0 | 25
  2014  | 11 | 7 | 9 | 0 | 27
  total | 24 | 19 | 9 | 0 | 52
  %     | 46 | 37 | 17 | 0 | 100 | 83 | 17

 

Quals

        | surpasses | meets | barely meets | fails to meet | total | surpasses/meets | barely/fails

1. Significant progress toward presenting original work in a professional manner as established by major, peer-reviewed journals and presses in the fields relevant to the paper's purview
  2013  | 2.5 | 8.5 | 1 | 0 | 12
  2014  | 9 | 8 | 0 | 0 | 17
  total | 11.5 | 16.5 | 1 | 0 | 29
  %     | 40 | 57 | 3 | 0 | 100 | 97 | 3

2. Superior professional writing ability, as measured by the work published by established scholars in peer-reviewed venues
  2013  | 1.5 | 7.5 | 3 | 0 | 12
  2014  | 8 | 9 | 0 | 0 | 17
  total | 9.5 | 16.5 | 3 | 0 | 29
  %     | 33 | 57 | 10 | 0 | 100 | 90 | 10

3. Critical judgment and acumen in choosing and digesting the current academic conversations relevant to the paper's purview
  2013  | 3.5 | 5.5 | 3 | 0 | 12
  2014  | 8 | 6 | 3 | 0 | 17
  total | 11.5 | 11.5 | 6 | 0 | 29
  %     | 40 | 40 | 21 | 0 | 100 | 79 | 21

4. An ability to generate well-supported analysis, grounded in the current critical conversations relevant to the paper's purview
  2013  | 2.5 | 8.5 | 1 | 0 | 12
  2014  | 7 | 10 | 0 | 0 | 17
  total | 9.5 | 18.5 | 1 | 0 | 29
  %     | 33 | 64 | 3 | 0 | 100 | 97 | 3

5. An ability to ground the analysis in the major historical lines of the critical and theoretical conversations that precede the current critical conversation
  2013  | 1.5 | 7.5 | 3 | 0 | 12
  2014  | 5 | 10 | 2 | 0 | 17
  total | 6.5 | 17.5 | 5 | 0 | 29
  %     | 22 | 60 | 17 | 0 | 100 | 83 | 17

6. Demonstration in the paper that the student has grasped the full range of pertinent scholarship and established his or her argument as one that will contribute to our understanding of the literary issues the paper raises
  2013  | 1 | 8 | 3 | 0 | 12
  2014  | 6 | 8 | 3 | 0 | 17
  total | 7 | 16 | 6 | 0 | 29
  %     | 24 | 55 | 21 | 0 | 100 | 79 | 21

7. Superior grasp of the mechanics of formal scholarly documentation and style
  2013  | 2 | 8 | 2 | 0 | 12
  2014  | 10 | 7 | 0 | 0 | 17
  total | 12 | 15 | 2 | 0 | 29
  %     | 41 | 52 | 7 | 0 | 100 | 93 | 7

8. Overall assessment of student learning
  2013  | 3.5 | 7.5 | 1 | 0 | 12
  2014  | 8 | 9 | 0 | 0 | 17
  total | 11.5 | 16.5 | 1 | 0 | 29
  %     | 40 | 57 | 3 | 0 | 100 | 97 | 3

 

Comps Oral

        | surpasses | meets | barely meets | fails to meet | total | surpasses/meets | barely/fails

1. Sufficiently thorough grasp of the material such that an ability to move among and between texts is demonstrated in response to the committee's oral questions
  2013  | 0 | 16 | 3 | 0 | 19
  2014  | 22 | 3 | 6 | 1 | 32
  total | 22 | 19 | 9 | 1 | 51
  %     | 43 | 37 | 18 | 2 | 100 | 80 | 20

2. Depth of knowledge of the history of the academic fields represented by the items on each of the fields covered by the exam lists
  2013  | 1 | 14 | 4 | 0 | 19
  2014  | 20 | 6 | 4 | 2 | 32
  total | 21 | 20 | 8 | 2 | 51
  %     | 41 | 39 | 16 | 4 | 100 | 80 | 20

3. Breadth of knowledge of the way in which the fields are situated in academic conversations, past and present, in peer-reviewed journals and presses
  2013  | 0 | 15 | 4 | 0 | 19
  2014  | 18 | 8 | 4 | 2 | 32
  total | 18 | 23 | 8 | 2 | 51
  %     | 35 | 45 | 16 | 4 | 100 | 80 | 20

4. Depth of knowledge of each individual text on each list
  2013  |  |  |  |  | 0
  2014  | 22 | 4 | 4 | 1 | 31
  total | 22 | 4 | 4 | 1 | 31
  %     | 71 | 13 | 13 | 3 | 100 | 84 | 16

5. Breadth of knowledge of the part each text takes in defining, extending, and/or challenging the fields in which they play a significant part
  2013  | 0 | 18 | 1 | 0 | 19
  2014  | 19 | 6 | 5 | 1 | 31
  total | 19 | 24 | 6 | 1 | 50
  %     | 38 | 48 | 12 | 2 | 100 | 86 | 14

6. Ability to directly, cogently, and explicitly respond to the specific questions posed by the committee
  2013  | 0 | 15 | 4 | 0 | 19
  2014  | 20 | 5 | 4 | 2 | 31
  total | 20 | 20 | 8 | 2 | 50
  %     | 40 | 40 | 16 | 4 | 100 | 80 | 20

7. Ability to move freely among texts and groups of texts in constructing on the spot analyses and arguments
  2013  | 0 | 11 | 8 | 0 | 19
  2014  | 20 | 5 | 4 | 2 | 31
  total | 20 | 16 | 12 | 2 | 50
  %     | 40 | 32 | 24 | 4 | 100 | 72 | 28

8. Overall assessment of student learning
  2013  | 0 | 16 | 3 | 0 | 19
  2014  | 21 | 5 | 4 | 1 | 31
  total | 21 | 21 | 7 | 1 | 50
  %     | 42 | 42 | 14 | 2 | 100 | 84 | 16

 

 

Mock Interview

        | surpasses | meets | barely meets | fails to meet | total | surpasses/meets | barely/fails

1. Ability to field and cogently address questions that are likely to be posed by a hiring committee in the student's fields
  2013  | 0 | 2 | 0 | 0 | 2
  2014  | 0 | 13 | 3 | 0 | 16
  total | 0 | 15 | 3 | 0 | 18
  %     | 0 | 83 | 17 | 0 | 100 | 83 | 17

2. Professional self-presentation
  2013  | 2 | 0 | 0 | 0 | 2
  2014  | 7 | 8 | 1 | 0 | 16
  total | 9 | 8 | 1 | 0 | 18
  %     | 50 | 44 | 6 | 0 | 100 | 94 | 6

3. Thorough knowledge of the fields addressed by the dissertation
  2013  | 1 | 1 | 0 | 0 | 2
  2014  | 7 | 8 | 1 | 0 | 16
  total | 8 | 9 | 1 | 0 | 18
  %     | 44 | 50 | 6 | 0 | 100 | 94 | 6

4. Ability to present expertise in the student's teaching fields
  2013  | 0 | 2 | 0 | 0 | 2
  2014  | 2 | 9 | 4 | 1 | 16
  total | 2 | 11 | 4 | 1 | 18
  %     | 11 | 61 | 22 | 6 | 100 | 72 | 28

5. Thorough preparation about the school and program that is posited as doing the interview
  2013  | 0 | 2 | 0 | 0 | 2
  2014  | 3 | 5 | 5 | 3 | 16
  total | 3 | 7 | 5 | 3 | 18
  %     | 17 | 39 | 28 | 17 | 100 | 56 | 44

6. Overall assessment of student learning
  2013  | 0 | 2 | 0 | 0 | 2
  2014  | 2 | 11 | 1 | 1 | 15
  total | 2 | 13 | 1 | 1 | 17
  %     | 12 | 76 | 6 | 6 | 100 | 88 | 12

 

Dissertation

        | surpasses | meets | barely meets | fails to meet | total | surpasses/meets | barely/fails

1. Formal academic writing ability that compares favorably to work published in peer-reviewed journals and presses
  2013  | 6 | 5 | 2 | 0 | 13
  2014  | 20 | 1 | 0 | 0 | 21
  total | 26 | 6 | 2 | 0 | 34
  %     | 76 | 18 | 6 | 0 | 100 | 94 | 6

2. Knowledge of the history of criticism and theory related to the fields of inquiry engaged by the topic of the dissertation
  2013  | 5 | 7.5 | 0.5 | 0 | 13
  2014  | 21 | 0 | 0 | 0 | 21
  total | 26 | 7.5 | 0.5 | 0 | 34
  %     | 76 | 22 | 1 | 0 | 100 | 99 | 1

3. Ability to craft an analysis and argument spanning the dissertation that compares favorably to work published in peer-reviewed journals and presses
  2013  | 5 | 5 | 3 | 0 | 13
  2014  | 21 | 0 | 0 | 0 | 21
  total | 26 | 5 | 3 | 0 | 34
  %     | 76 | 15 | 9 | 0 | 100 | 91 | 9

4. Scholarly work that makes an original contribution to the academic fields it engages
  2013  | 5 | 6 | 2 | 0 | 13
  2014  | 21 | 0 | 0 | 0 | 21
  total | 26 | 6 | 2 | 0 | 34
  %     | 76 | 18 | 6 | 0 | 100 | 94 | 6

5. Critical and textual acumen in the treatment of both individual texts and the scholarly fields in which they are located
  2013  | 5 | 6 | 2 | 0 | 13
  2014  | 21 | 0 | 0 | 0 | 21
  total | 26 | 6 | 2 | 0 | 34
  %     | 76 | 18 | 6 | 0 | 100 | 94 | 6

6. Overall assessment of student learning
  2013  | 6 | 5.5 | 1.5 | 0 | 13
  2014  | 21 | 0 | 0 | 0 | 21
  total | 27 | 5.5 | 1.5 | 0 | 34
  %     | 79 | 16 | 4 | 0 | 100 | 96 | 4

 

 

 

INDIRECT ASSESSMENT INSTRUMENTS

MA exit (indirect)

        | very valuable | valuable | not very valuable | not valuable | total | very valuable/valuable | not very/not valuable

1. seminar work
  2013  | 1 | 1 | 0 | 0 | 2
  2014  | 3 | 8 | 0 | 0 | 11
  total | 4 | 9 | 0 | 0 | 13
  %     | 31 | 69 | 0 | 0 | 100 | 100 | 0

2. mentoring by individual faculty members
  2013  | 2 | 0 | 0 | 0 | 2
  2014  | 8 | 3 | 0 | 0 | 11
  total | 10 | 3 | 0 | 0 | 13
  %     | 77 | 23 | 0 | 0 | 100 | 100 | 0

3. the First Year Literature Colloquium
  2013  | 0 | 0 | 0 | 0 | 0
  2014  | 0 | 0 | 0 | 0 | 0
  total | 0 | 0 | 0 | 0 | 0
  %     | n/a (no responses; see Additional Notes below)

4. MA examination (preparation for the examination as well as the actual written and oral exams)
  2013  | 1 | 1 | 0 | 0 | 2
  2014  | 7 | 3 | 1 | 0 | 11
  total | 8 | 4 | 1 | 0 | 13
  %     | 62 | 31 | 8 | 0 | 100 | 92 | 8

5. Qualifying paper
  2013  | 0 | 0 | 0 | 2 | 2
  2014  | 1 | 3 | 2 | 0 | 6
  total | 1 | 3 | 2 | 2 | 8
  %     | 13 | 38 | 25 | 25 | 100 | 50 | 50

6. Please evaluate the performance of the program as a whole in preparing you for the profession
        | excellent | good | fair | poor | total | excellent/good | fair/poor
  2013  | 1 | 1 | 0 | 0 | 2
  2014  | 4 | 7 | 0 | 0 | 11
  total | 5 | 8 | 0 | 0 | 13
  %     | 38 | 62 | 0 | 0 | 100 | 100 | 0

 

PhD exit (indirect)

        | very valuable | valuable | not very valuable | not valuable | total | very valuable/valuable | not very/not valuable

1. seminar work
  2013  | 0 | 0 | 0 | 0 | 0
  2014  | 4 | 0 | 0 | 0 | 4
  total | 4 | 0 | 0 | 0 | 4
  %     | 100 | 0 | 0 | 0 | 100 | 100 | 0

2. mentoring by individual faculty members
  2013  | 0 | 0 | 0 | 0 | 0
  2014  | 2 | 2 | 0 | 0 | 4
  total | 2 | 2 | 0 | 0 | 4
  %     | 50 | 50 | 0 | 0 | 100 | 100 | 0

3. the First Year Literature Colloquium
  2013  | 0 | 0 | 0 | 0 | 0
  2014  | 0 | 0 | 0 | 0 | 0
  total | 0 | 0 | 0 | 0 | 0
  %     | n/a (no responses; see Additional Notes below)

4. Comprehensive Examination (preparation for the examination as well as the actual written and oral exams)
  2013  | 0 | 0 | 0 | 0 | 0
  2014  | 3 | 1 | 0 | 0 | 4
  total | 3 | 1 | 0 | 0 | 4
  %     | 75 | 25 | 0 | 0 | 100 | 100 | 0

5. Dissertation
  2013  | 0 | 0 | 0 | 0 | 0
  2014  | 4 | 0 | 0 | 0 | 4
  total | 4 | 0 | 0 | 0 | 4
  %     | 100 | 0 | 0 | 0 | 100 | 100 | 0

6. Job Placement Seminar (if you took it)
  2013  | 0 | 0 | 0 | 0 | 0
  2014  | 0 | 3 | 1 | 0 | 4
  total | 0 | 3 | 1 | 0 | 4
  %     | 0 | 75 | 25 | 0 | 100 | 75 | 25

7. Mock interview (if you did one)
  2013  | 0 | 0 | 0 | 0 | 0
  2014  | 1 | 1 | 0 | 0 | 2
  total | 1 | 1 | 0 | 0 | 2
  %     | 50 | 50 | 0 | 0 | 100 | 100 | 0

8. Please evaluate the performance of the program as a whole in preparing you for the profession
        | excellent | good | fair | poor | total | excellent/good | fair/poor
  2013  | 0 | 0 | 0 | 0 | 0
  2014  | 2 | 1 | 1 | 0 | 4
  total | 2 | 1 | 1 | 0 | 4
  %     | 50 | 25 | 25 | 0 | 100 | 75 | 25

 

Additional Notes:

 

On both indirect assessment instruments, the question asking for evaluation of the First Year Literature Colloquium was inadvertently omitted; it will be added to the forms.

 

 

SPRING 2017 ASSESSMENT FINDINGS

(Cumulative findings: data from 2013, 2014, 2015, 2016)

 

The cumulative assessment data from 2013 through 2016 continue to suggest that a high majority of students in the program are achieving the Learning Outcomes stipulated by program faculty. Each of the direct assessment instruments, at both the MA and PhD levels, confirms this positive view. Moreover, indirect assessment instruments, also at both the MA and PhD levels, reveal students’ high opinion of the training they have received in the program.

 

After four years of data collection, these findings suggest a stable record of program success in achieving our learning outcomes: though we are dealing with a selective graduate program rather than a large undergraduate population and the data pool is accordingly relatively small, a clear picture has nonetheless emerged. 

 

Below is a summary table of results to date, followed by an explanation of some of the column headings. There then follow detailed tables showing results for each of the individual categories on each of the instruments.

 

The following table gives cumulative scores, for 2013-2016, for faculty assessment of “overall” student performance on each of the direct assessment instruments, and for student assessment of the program on the indirect assessment instruments. (The final question on each of the instruments asks for an assessment of “overall” performance.)


 

 

 

exam: overall   | % surpasses | % meets | % barely meets | % fails to meet | tot % | % surpasses + meets | % barely + fails | questionnaires | committee size | # students | # students fail
MA written      | 50 | 45 | 5  | 0 | 100 | 95 | 5  | 85 | 3 | 28.3 | 0.0
MA oral         | 48 | 40 | 12 | 0 | 100 | 88 | 12 | 81 | 3 | 27.0 | 0.0
Quals           | 50 | 48 | 2  | 0 | 100 | 98 | 2  | 41 | 4 | 10.3 | 0.0
Comps Writtens  | 51 | 33 | 14 | 1 | 99  | 84 | 15 | 78 | 4 | 19.5 | 0.2
Comps Orals     | 50 | 37 | 12 | 1 | 100 | 87 | 13 | 82 | 4 | 20.5 | 0.2
Dissertation    | 76 | 20 | 4  | 0 | 100 | 96 | 4  | 38 | 3 | 12.7 | 0.0
Mock Interview  | 12 | 76 | 6  | 6 | 100 | 88 | 12 | 17 | 3 | 5.7  | 0.3

exit (indirect) | excellent | good | fair | poor | tot % | ex/good | fair/poor | questionnaires | committee size | # students | # students poor
MA              | 48 | 48 | 5  | 0 | 101 | 96 | 5  | 21 | 1 | 21.0 | 0.0
PhD             | 43 | 43 | 14 | 0 | 100 | 86 | 14 | 7  | 1 | 7.0  | 0.0

 

A note on the column headers:

 

Columns 2-5 give percentages of faculty who scored performance at each of the four levels of achievement (surpasses, meets, barely meets, fails to meet.) (For the indirect assessment instruments: the number of students who judged the program excellent, good, fair, poor.)

 

Column 6 adds up the percentages in columns 2-5 (the occasional 99 rather than 100 results from rounding the percentages in columns 2-5).

 

Column 7 pools the “surpasses” and “meets” percentages, providing a snapshot of “successful” outcomes. (For the indirect assessment instruments: “excellent” or “good.”)

 

Column 8 pools “barely meets” and “fails to meet,” providing a snapshot of “less successful” plus “unsuccessful” outcomes. (For the indirect assessment instruments: “fair” or “poor.”)

 

The next columns show the size of the data pool:

 

Column 9 shows the total number of questionnaires filled out for each instrument.

 

Column 10 shows the committee size for each exam (or, for the indirect assessments, the number of students at a time filling out the questionnaire, which is always 1).

 

Column 11 divides column 9 by column 10, to show (roughly) the total number of students evaluated so far on each instrument (or, for the indirect assessment instruments, the total number of students who have evaluated the program). Figures which are not whole numbers result from the rare instances in which an examiner does not fill out a questionnaire or leaves the “overall” score blank.

 

Column 12 multiplies column 11 (number of students) by column 5 (percentage scored “fails to meet”), showing the total number of students evaluated as “failing to meet” the learning objectives assessed on each instrument. (Or, for the indirect instruments: the number of students judging the program “poor.”) The fact that these numbers are either “0” or extremely small is a reminder that the total number of students failing to meet our learning outcomes as measured by all our assessment instruments is well within appropriate limits for the program, as is the number of students who evaluate the program as “poor.”

 

But perhaps the most significant column at this point is column 7, which shows the percentage of scores on each direct assessment instrument that judge performance as surpassing or meeting the learning objectives (or, on the indirect assessment instruments, that regard the program as “excellent” or “good”): in the literature faculty’s Spring 2017 review of assessment data to date, we regarded the figures in this column as confirmation that the program is doing well at preparing students in the field of literary studies in English.

 

Below are individual tables for each assessment instrument, showing results for each of the sub-categories of performance on each (as well as separate figures for each year). The columns are similar to what they are for the “overall” table discussed above (the columns in the table above detailing the size of the data pool have been eliminated for the tables below).

 

 

MA Written

        | surpasses | meets | barely meets | fails to meet | total | surpasses/meets | barely/fails

1. Comprehension of and facility with the general traditions of English literature
  2013  | 17 | 9 | 1 | 0 | 27
  2014  | 9 | 15 | 4 | 0 | 28
  2015  | 7 | 3 | 2 | 0 | 12
  2016  | 11 | 6 | 0 | 0 | 17
  total | 44 | 33 | 7 | 0 | 84
  %     | 52 | 39 | 8 | 0 | 100 | 92 | 8

2. Ability to analyze literature
  2013  |  |  |  |  | 0
  2014  | 15 | 12 | 0 | 0 | 27
  2015  | 8 | 2 | 2 | 0 | 12
  2016  | 9 | 8 | 0 | 0 | 17
  total | 23 | 14 | 2 | 0 | 39
  %     | 59 | 36 | 5 | 0 | 100 | 95 | 5

3. Ability to apply literary analysis to a general narrative of English literary traditions and conventions
  2013  |  |  |  |  | 0
  2014  | 16 | 7 | 4 | 0 | 27
  2015  | 5 | 5 | 2 | 0 | 12
  2016  | 9 | 8 | 0 | 0 | 17
  total | 21 | 12 | 6 | 0 | 39
  %     | 54 | 31 | 15 | 0 | 100 | 85 | 15

4. Ability to write clearly and cogently
  2013  | 20 | 7 | 0 | 0 | 27
  2014  | 10 | 14 | 4 | 0 | 28
  2015  | 9 | 3 | 0 | 0 | 12
  2016  | 11 | 7 | 0 | 0 | 18
  total | 39 | 24 | 4 | 0 | 67
  %     | 58 | 36 | 6 | 0 | 100 | 94 | 6

5. Overall assessment of student learning
  2013  | 16 | 11 | 0 | 0 | 27
  2014  | 9 | 15 | 4 | 0 | 28
  2015  | 8 | 4 | 0 | 0 | 12
  2016  | 9 | 8 | 0 | 0 | 18
  total | 42 | 38 | 4 | 0 | 84
  %     | 50 | 45 | 5 | 0 | 100 | 95 | 5

 

 

 

MA Oral

        | surpasses | meets | barely meets | fails to meet | total | surpasses/meets | barely/fails

1. Depth of knowledge of the work of those writers who have established and/or transformed their literary genres
  2013  | 13 | 11 | 2 | 0 | 26
  2014  | 10 | 7 | 10 | 0 | 27
  2015  | 7 | 3 | 2 | 0 | 12
  2016  | 9 | 6 | 2 | 0 | 17
  total | 39 | 27 | 16 | 0 | 82
  %     | 48 | 33 | 20 | 0 | 100 | 80 | 20

2. Depth of knowledge of the work of those writers who have established and/or transformed their literary periods
  2013  | 14 | 10 | 2 | 0 | 26
  2014  | 11 | 7 | 9 | 0 | 27
  2015  | 7 | 4 | 1 | 0 | 12
  2016  | 10 | 5 | 2 | 0 | 17
  total | 42 | 26 | 14 | 0 | 82
  %     | 51 | 32 | 17 | 0 | 100 | 83 | 17

3. Breadth of knowledge as to the impact the work of significant authors has had on major literary genres
  2013  | 12 | 14 | 0 | 0 | 26
  2014  | 11 | 6 | 10 | 0 | 27
  2015  | 5 | 6 | 1 | 0 | 12
  2016  | 10 | 6 | 1 | 0 | 17
  total | 38 | 32 | 12 | 0 | 82
  %     | 46 | 39 | 15 | 0 | 100 | 85 | 15

4. Breadth of knowledge as to the impact the work of significant authors has had on major literary periods
  2013  | 15 | 11 | 0 | 0 | 26
  2014  | 10 | 8 | 9 | 0 | 27
  2015  | 6 | 5 | 1 | 0 | 12
  2016  | 8 | 8 | 1 | 0 | 17
  total | 31 | 24 | 10 | 0 | 65
  %     | 48 | 37 | 15 | 0 | 100 | 85 | 15

5. Breadth and depth of knowledge of earlier periods (through 18th Century)
  2013  | 14 | 12 | 0 | 0 | 26
  2014  | 11 | 9 | 7 | 0 | 27
  2015  | 8 | 3 | 1 | 0 | 12
  2016  | 8 | 8 | 1 | 0 | 17
  total | 41 | 32 | 9 | 0 | 82
  %     | 50 | 39 | 11 | 0 | 100 | 89 | 11

6. Breadth and depth of knowledge of later periods (19th Century forward)
  2013  | 15 | 11 | 0 | 0 | 26
  2014  | 11 | 8 | 7 | 0 | 26
  2015  | 8 | 3 | 1 | 0 | 12
  2016  | 11 | 5 | 1 | 0 | 17
  total | 45 | 27 | 9 | 0 | 81
  %     | 56 | 33 | 11 | 0 | 100 | 89 | 11

7. Ability to move cogently and convincingly among and between texts and groups of texts in response to questions and comments posed by scholars in a variety of fields
  2013  | 13 | 10 | 2 | 0 | 25
  2014  | 11 | 8 | 8 | 0 | 27
  2015  | 7 | 3 | 2 | 0 | 12
  2016  | 8 | 9 | 0 | 0 | 17
  total | 39 | 30 | 12 | 0 | 81
  %     | 48 | 37 | 15 | 0 | 100 | 85 | 15

8. Critical acumen in the discussion of individual texts
  2013  | 14 | 10 | 1 | 0 | 25
  2014  | 9 | 10 | 7 | 1 | 27
  2015  | 3 | 7 | 2 | 0 | 12
  2016  | 8 | 9 | 0 | 0 | 17
  total | 34 | 36 | 10 | 1 | 81
  %     | 42 | 44 | 12 | 1 | 100 | 86 | 14

9. Critical acumen in the discussion of broad groupings of texts (e.g., literary periods, literary movements, and the like)
  2013  | 10 | 13 | 2 | 0 | 25
  2014  | 10 | 8 | 8 | 1 | 27
  2015  | 7 | 4 | 1 | 0 | 12
  2016  | 7 | 8 | 2 | 0 | 17
  total | 34 | 33 | 13 | 1 | 81
  %     | 42 | 41 | 16 | 1 | 100 | 83 | 17

10. Overall assessment of student learning
  2013  | 13 | 12 | 0 | 0 | 25
  2014  | 11 | 7 | 9 | 0 | 27
  2015  | 6 | 5 | 1 | 0 | 12
  2016  | 9 | 8 | 0 | 0 | 17
  total | 39 | 32 | 10 | 0 | 81
  %     | 48 | 40 | 12 | 0 | 100 | 88 | 12

 

 

 

Quals (columns: surpasses | meets | barely meets | fails to meet | total)

1. Significant progress toward presenting original work in a professional manner as established by major, peer-reviewed journals and presses in the fields relevant to the paper's purview
   2013:   2.5 |  8.5 | 1 | 0 | 12
   2014:     9 |    8 | 0 | 0 | 17
   2015:     3 |    1 | 0 | 0 |  4
   2016:     6 |    1 | 1 | 0 |  8
   total: 20.5 | 18.5 | 2 | 0 | 41
   %:       50 |   45 | 5 | 0 | 100   (surpasses/meets 95, barely/fails 5)

2. Superior professional writing ability, as measured by the work published by established scholars in peer-reviewed venues
   2013:   1.5 |  7.5 |  3 | 0 | 12
   2014:     8 |    9 |  0 | 0 | 17
   2015:     4 |    0 |  0 | 0 |  4
   2016:     4 |    3 |  1 | 0 |  8
   total: 17.5 | 19.5 |  4 | 0 | 41
   %:       43 |   48 | 10 | 0 | 100   (surpasses/meets 90, barely/fails 10)

3. Critical judgment and acumen in choosing and digesting the current academic conversations relevant to the paper's purview
   2013:   3.5 |  5.5 |  3 | 0 | 12
   2014:     8 |    6 |  3 | 0 | 17
   2015:     3 |    1 |  0 | 0 |  4
   2016:     6 |    2 |  0 | 0 |  8
   total: 20.5 | 14.5 |  6 | 0 | 41
   %:       50 |   35 | 15 | 0 | 100   (surpasses/meets 85, barely/fails 15)

4. An ability to generate well-supported analysis, grounded in the current critical conversations relevant to the paper's purview
   2013:   2.5 |  8.5 | 1 | 0 | 12
   2014:     7 |   10 | 0 | 0 | 17
   2015:     3 |    1 | 0 | 0 |  4
   2016:     6 |    2 | 0 | 0 |  8
   total: 18.5 | 21.5 | 1 | 0 | 41
   %:       45 |   52 | 2 | 0 | 100   (surpasses/meets 98, barely/fails 2)

5. An ability to ground the analysis in the major historical lines of the critical and theoretical conversations that precede the current critical conversation
   2013:   1.5 |  7.5 |  3 | 0 | 12
   2014:     5 |   10 |  2 | 0 | 17
   2015:     3 |    1 |  0 | 0 |  4
   2016:     6 |    2 |  0 | 0 |  8
   total: 15.5 | 20.5 |  5 | 0 | 41
   %:       38 |   50 | 12 | 0 | 100   (surpasses/meets 88, barely/fails 12)

6. Demonstration in the paper that the student has grasped the full range of pertinent scholarship and established his or her argument as one that will contribute to our understanding of the literary issues the paper raises
   2013:   1 |  8 |  3 | 0 | 12
   2014:   6 |  8 |  3 | 0 | 17
   2015:   3 |  1 |  0 | 0 |  4
   2016:   5 |  2 |  1 | 0 |  8
   total: 15 | 19 |  7 | 0 | 41
   %:     37 | 46 | 17 | 0 | 100   (surpasses/meets 83, barely/fails 17)

7. Superior grasp of the mechanics of formal scholarly documentation and style
   2013:   2 |  8 | 2 | 0 | 12
   2014:  10 |  7 | 0 | 0 | 17
   2015:   4 |  0 | 0 | 0 |  4
   2016:   5 |  2 | 1 | 0 |  8
   total: 21 | 17 | 3 | 0 | 41
   %:     51 | 41 | 7 | 0 | 100   (surpasses/meets 93, barely/fails 7)

8. Overall assessment of student learning
   2013:   3.5 |  7.5 | 1 | 0 | 12
   2014:     8 |    9 | 0 | 0 | 17
   2015:     3 |    1 | 0 | 0 |  4
   2016:     6 |    2 | 0 | 0 |  8
   total: 20.5 | 19.5 | 1 | 0 | 41
   %:       50 |   48 | 2 | 0 | 100   (surpasses/meets 98, barely/fails 2)

Comps Writtens (columns: surpasses | meets | barely meets | fails to meet | total)

1. Breadth and depth of knowledge of a literary genre, period, and two major authors
   2013:   6 | 13 |  0 | 0 | 19
   2014:  19 |  8 |  4 | 1 | 32
   2015:   0 |  1 |  3 | 0 |  4
   2016:  17 |  5 |  2 | 0 | 24
   total: 42 | 27 |  9 | 1 | 79
   %:     53 | 34 | 11 | 1 | 100   (surpasses/meets 87, barely/fails 13)

2. A professional-level ability to situate discussions prompted by the committee's written questions in an appropriately chosen scholarly approach
   2013:   6 |  7 |  6 | 0 | 19
   2014:  17 |  8 |  6 | 1 | 32
   2015:   0 |  0 |  3 | 1 |  4
   2016:  14 |  7 |  3 | 0 | 24
   total: 37 | 22 | 18 | 2 | 79
   %:     47 | 28 | 23 | 3 | 101   (surpasses/meets 75, barely/fails 25)

3. A professional-level awareness of the state of the scholarly fields explicitly or implicitly invoked by the committee's written questions
   2013:   6 | 10 |  3 | 0 | 19
   2014:  15 | 10 |  4 | 2 | 31
   2015:   0 |  1 |  2 | 1 |  4
   2016:  15 |  7 |  2 | 0 | 24
   total: 36 | 28 | 11 | 3 | 78
   %:     46 | 36 | 14 | 4 | 100   (surpasses/meets 82, barely/fails 18)

4. A professional-level ability to organize the allotted time to produce a coherent and finished response to the written questions of the committee
   2013:   6 | 10 |  3 | 0 | 19
   2014:  16 | 10 |  2 | 3 | 31
   2015:   0 |  0 |  3 | 1 |  4
   2016:  17 |  6 |  1 | 0 | 24
   total: 39 | 26 |  9 | 4 | 78
   %:     50 | 33 | 12 | 5 | 100   (surpasses/meets 83, barely/fails 17)

5. Ability to address the written questions directly and cogently
   2013:   6 |  9 |  4 | 0 | 19
   2014:  18 |  9 |  3 | 1 | 31
   2015:   0 |  1 |  3 | 0 |  4
   2016:  17 |  5 |  2 | 0 | 24
   total: 41 | 24 | 12 | 1 | 78
   %:     53 | 31 | 15 | 1 | 100   (surpasses/meets 83, barely/fails 17)

6. Clarity of expression
   2013:   6 | 12 |  1 | 0 | 19
   2014:  14 | 13 |  4 | 0 | 31
   2015:   0 |  0 |  3 | 1 |  4
   2016:  16 |  6 |  2 | 0 | 24
   total: 36 | 31 | 10 | 1 | 78
   %:     46 | 40 | 13 | 1 | 100   (surpasses/meets 86, barely/fails 14)

7. Overall assessment of student learning
   2013:   6 | 10 |  3 | 0 | 19
   2014:  17 |  9 |  4 | 1 | 31
   2015:   0 |  1 |  3 | 0 |  4
   2016:  17 |  6 |  1 | 0 | 24
   total: 40 | 26 | 11 | 1 | 78
   %:     51 | 33 | 14 | 1 | 100   (surpasses/meets 85, barely/fails 15)

Comps Oral (columns: surpasses | meets | barely meets | fails to meet | total)

1. Sufficiently thorough grasp of the material such that an ability to move among and between texts is demonstrated in response to the committee's oral questions
   2013:   0 | 16 |  3 | 0 | 19
   2014:  22 |  3 |  6 | 1 | 32
   2015:   3 |  2 |  3 | 0 |  8
   2016:  18 |  6 |  0 | 0 | 24
   total: 43 | 27 | 12 | 1 | 83
   %:     52 | 33 | 14 | 1 | 100   (surpasses/meets 84, barely/fails 16)

2. Depth of knowledge of the history of the academic fields represented by the items on each of the fields covered by the exam lists
   2013:   1 | 14 |  4 | 0 | 19
   2014:  20 |  6 |  4 | 2 | 32
   2015:   4 |  1 |  1 | 2 |  8
   2016:  15 |  9 |  0 | 0 | 24
   total: 40 | 30 |  9 | 4 | 83
   %:     48 | 36 | 11 | 5 | 100   (surpasses/meets 84, barely/fails 16)

3. Breadth of knowledge of the way in which the fields are situated in academic conversations, past and present, in peer-reviewed journals and presses
   2013:   0 | 15 |  4 | 0 | 19
   2014:  18 |  8 |  4 | 2 | 32
   2015:   4 |  1 |  2 | 1 |  8
   2016:  13 | 10 |  1 | 0 | 24
   total: 35 | 34 | 11 | 3 | 83
   %:     42 | 41 | 13 | 4 | 100   (surpasses/meets 83, barely/fails 17)

4. Depth of knowledge of each individual text on each list
   2013:   - |  - |  - | - |  0
   2014:  22 |  4 |  4 | 1 | 31
   2015:   4 |  1 |  3 | 0 |  8
   2016:  17 |  8 |  0 | 0 | 25
   total: 26 |  5 |  7 | 1 | 39
   %:     67 | 13 | 18 | 3 | 100   (surpasses/meets 79, barely/fails 21)

5. Breadth of knowledge of the part each text takes in defining, extending, and/or challenging the fields in which they play a significant part
   2013:   0 | 18 |  1 | 0 | 19
   2014:  19 |  6 |  5 | 1 | 31
   2015:   4 |  0 |  4 | 0 |  8
   2016:  13 |  9 |  2 | 0 | 24
   total: 36 | 33 | 12 | 1 | 82
   %:     44 | 40 | 15 | 1 | 100   (surpasses/meets 84, barely/fails 16)

6. Ability to directly, cogently, and explicitly respond to the specific questions posed by the committee
   2013:   0 | 15 |  4 | 0 | 19
   2014:  20 |  5 |  4 | 2 | 31
   2015:   3 |  2 |  2 | 1 |  8
   2016:  17 |  7 |  0 | 0 | 24
   total: 40 | 29 | 10 | 3 | 82
   %:     49 | 35 | 12 | 4 | 100   (surpasses/meets 84, barely/fails 16)

7. Ability to move freely among texts and groups of texts in constructing on the spot analyses and arguments
   2013:   0 | 11 |  8 | 0 | 19
   2014:  20 |  5 |  4 | 2 | 31
   2015:   4 |  1 |  2 | 1 |  8
   2016:  16 |  7 |  1 | 0 | 24
   total: 40 | 24 | 15 | 3 | 82
   %:     49 | 29 | 18 | 4 | 100   (surpasses/meets 78, barely/fails 22)

8. Overall assessment of student learning
   2013:   0 | 16 |  3 | 0 | 19
   2014:  21 |  5 |  4 | 1 | 31
   2015:   4 |  1 |  3 | 0 |  8
   2016:  16 |  8 |  0 | 0 | 24
   total: 41 | 30 | 10 | 1 | 82
   %:     50 | 37 | 12 | 1 | 100   (surpasses/meets 87, barely/fails 13)

Dissertation (columns: surpasses | meets | barely meets | fails to meet | total)

1. Formal academic writing ability that compares favorably to work published in peer-reviewed journals and presses
   2013:   6 |  5 | 2 | 0 | 13
   2014:  20 |  1 | 0 | 0 | 21
   2015:   1 |  3 | 0 | 0 |  4
   2016:   - |  - | - | - |  0
   total: 27 |  9 | 2 | 0 | 38
   %:     71 | 24 | 5 | 0 | 100   (surpasses/meets 95, barely/fails 5)

2. Knowledge of the history of criticism and theory related to the fields of inquiry engaged by the topic of the dissertation
   2013:   5 | 7.5 | 0.5 | 0 | 13
   2014:  21 |   0 |   0 | 0 | 21
   2015:   2 |   2 |   0 | 0 |  4
   2016:   - |   - |   - | - |  0
   total: 28 | 9.5 | 0.5 | 0 | 38
   %:     74 |  25 |   1 | 0 | 100   (surpasses/meets 99, barely/fails 1)

3. Ability to craft an analysis and argument spanning the dissertation that compares favorably to work published in peer-reviewed journals and presses
   2013:   5 |  5 | 3 | 0 | 13
   2014:  21 |  0 | 0 | 0 | 21
   2015:   2 |  2 | 0 | 0 |  4
   2016:   - |  - | - | - |  0
   total: 28 |  7 | 3 | 0 | 38
   %:     74 | 18 | 8 | 0 | 100   (surpasses/meets 92, barely/fails 8)

4. Scholarly work that makes an original contribution to the academic fields it engages
   2013:   5 |  6 | 2 | 0 | 13
   2014:  21 |  0 | 0 | 0 | 21
   2015:   2 |  1 | 1 | 0 |  4
   2016:   - |  - | - | - |  0
   total: 28 |  7 | 3 | 0 | 38
   %:     74 | 18 | 8 | 0 | 100   (surpasses/meets 92, barely/fails 8)

5. Critical and textual acumen in the treatment of both individual texts and the scholarly fields in which they are located
   2013:   5 |  6 | 2 | 0 | 13
   2014:  21 |  0 | 0 | 0 | 21
   2015:   2 |  2 | 0 | 0 |  4
   2016:   - |  - | - | - |  0
   total: 28 |  8 | 2 | 0 | 38
   %:     74 | 21 | 5 | 0 | 100   (surpasses/meets 95, barely/fails 5)

6. Overall assessment of student learning
   2013:   6 | 5.5 | 1.5 | 0 | 13
   2014:  21 |   0 |   0 | 0 | 21
   2015:   2 |   2 |   0 | 0 |  4
   2016:   - |   - |   - | - |  0
   total: 29 | 7.5 | 1.5 | 0 | 38
   %:     76 |  20 |   4 | 0 | 100   (surpasses/meets 96, barely/fails 4)

Mock Interview (columns: surpasses | meets | barely meets | fails to meet | total)

1. Ability to field and cogently address questions that are likely to be posed by a hiring committee in the student's fields
   2013:   0 |  2 |  0 | 0 |  2
   2014:   0 | 13 |  3 | 0 | 16
   2015:   0 |  0 |  0 | 0 |  0
   2016:   0 |  0 |  0 | 0 |  0
   total:  0 | 15 |  3 | 0 | 18
   %:      0 | 83 | 17 | 0 | 100   (surpasses/meets 83, barely/fails 17)

2. Professional self-presentation
   2013:   2 |  0 | 0 | 0 |  2
   2014:   7 |  8 | 1 | 0 | 16
   2015:   0 |  0 | 0 | 0 |  0
   2016:   0 |  0 | 0 | 0 |  0
   total:  9 |  8 | 1 | 0 | 18
   %:     50 | 44 | 6 | 0 | 100   (surpasses/meets 94, barely/fails 6)

3. Thorough knowledge of the fields addressed by the dissertation
   2013:   1 |  1 | 0 | 0 |  2
   2014:   7 |  8 | 1 | 0 | 16
   2015:   0 |  0 | 0 | 0 |  0
   2016:   0 |  0 | 0 | 0 |  0
   total:  8 |  9 | 1 | 0 | 18
   %:     44 | 50 | 6 | 0 | 100   (surpasses/meets 94, barely/fails 6)

4. Ability to present expertise in the student's teaching fields
   2013:   0 |  2 |  0 | 0 |  2
   2014:   2 |  9 |  4 | 1 | 16
   2015:   0 |  0 |  0 | 0 |  0
   2016:   0 |  0 |  0 | 0 |  0
   total:  2 | 11 |  4 | 1 | 18
   %:     11 | 61 | 22 | 6 | 100   (surpasses/meets 72, barely/fails 28)

5. Thorough preparation about the school and program that is posited as doing the interview
   2013:   0 |  2 |  0 |  0 |  2
   2014:   3 |  5 |  5 |  3 | 16
   2015:   0 |  0 |  0 |  0 |  0
   2016:   0 |  0 |  0 |  0 |  0
   total:  3 |  7 |  5 |  3 | 18
   %:     17 | 39 | 28 | 17 | 100   (surpasses/meets 56, barely/fails 44)

6. Overall assessment of student learning
   2013:   0 |  2 | 0 | 0 |  2
   2014:   2 | 11 | 1 | 1 | 15
   2015:   0 |  0 | 0 | 0 |  0
   2016:   0 |  0 | 0 | 0 |  0
   total:  2 | 13 | 1 | 1 | 17
   %:     12 | 76 | 6 | 6 | 100   (surpasses/meets 88, barely/fails 12)

MA exit (indirect) (columns: very valuable | valuable | not very valuable | not valuable | total)

1. seminar work
   2013:   1 |  1 | 0 | 0 |  2
   2014:   3 |  8 | 0 | 0 | 11
   2015:   1 |  0 | 1 | 0 |  2
   2016:   5 |  1 | 0 | 0 |  6
   total: 10 | 10 | 1 | 0 | 21
   %:     48 | 48 | 5 | 0 | 100   (very valuable/valuable 95, not very/not valuable 5)

2. mentoring by individual faculty members
   2013:   2 |  0 | 0 | 0 |  2
   2014:   8 |  3 | 0 | 0 | 11
   2015:   1 |  1 | 0 | 0 |  2
   2016:   4 |  2 | 0 | 0 |  6
   total: 15 |  6 | 0 | 0 | 21
   %:     71 | 29 | 0 | 0 | 100   (very valuable/valuable 100, not very/not valuable 0)

3. the First Year Literature Colloquium
   2013:   0 | 0 | 0 | 0 | 0
   2014:   0 | 0 | 0 | 0 | 0
   2015:   0 | 0 | 0 | 0 | 0
   2016:   0 | 0 | 0 | 0 | 0
   total:  0 | 0 | 0 | 0 | 0
   %:     no responses recorded

4. MA examination (preparation for the examination as well as the actual written and oral exams)
   2013:   1 |  1 |  0 | 0 |  2
   2014:   7 |  3 |  1 | 0 | 11
   2015:   0 |  1 |  1 | 0 |  2
   2016:   5 |  1 |  0 | 0 |  6
   total: 13 |  6 |  2 | 0 | 21
   %:     62 | 29 | 10 | 0 | 100   (very valuable/valuable 90, not very/not valuable 10)

5. Qualifying paper
   2013:   0 |  0 |  0 |  2 |  2
   2014:   1 |  3 |  2 |  0 |  6
   2015:   1 |  0 |  0 |  0 |  1
   2016:   1 |  2 |  0 |  0 |  3
   total:  3 |  5 |  2 |  2 | 12
   %:     25 | 42 | 17 | 17 | 100   (very valuable/valuable 67, not very/not valuable 33)

6. Please evaluate the performance of the program as a whole in preparing you for the profession
   (columns: excellent | good | fair | poor | total)
   2013:   1 |  1 | 0 | 0 |  2
   2014:   4 |  7 | 0 | 0 | 11
   2015:   0 |  1 | 1 | 0 |  2
   2016:   5 |  1 | 0 | 0 |  6
   total: 10 | 10 | 1 | 0 | 21
   %:     48 | 48 | 5 | 0 | 100   (excellent/good 95, fair/poor 5)

PhD exit (indirect) (columns: very valuable | valuable | not very valuable | not valuable | total)

1. seminar work
   2013:   0 |  0 | 0 | 0 | 0
   2014:   4 |  0 | 0 | 0 | 4
   2015:   1 |  1 | 0 | 0 | 2
   2016:   1 |  0 | 0 | 0 | 1
   total:  6 |  1 | 0 | 0 | 7
   %:     86 | 14 | 0 | 0 | 100   (very valuable/valuable 100, not very/not valuable 0)

2. mentoring by individual faculty members
   2013:   0 |  0 | 0 | 0 | 0
   2014:   2 |  2 | 0 | 0 | 4
   2015:   1 |  1 | 0 | 0 | 2
   2016:   0 |  1 | 0 | 0 | 1
   total:  3 |  4 | 0 | 0 | 7
   %:     43 | 57 | 0 | 0 | 100   (very valuable/valuable 100, not very/not valuable 0)

3. the First Year Literature Colloquium
   2013:   0 | 0 | 0 | 0 | 0
   2014:   0 | 0 | 0 | 0 | 0
   2015:   0 | 0 | 0 | 0 | 0
   2016:   0 | 0 | 0 | 0 | 0
   total:  0 | 0 | 0 | 0 | 0
   %:     no responses recorded

4. Comprehensive Examination (preparation for the examination as well as the actual written and oral exams)
   2013:   0 |  0 | 0 | 0 | 0
   2014:   3 |  1 | 0 | 0 | 4
   2015:   1 |  1 | 0 | 0 | 2
   2016:   0 |  1 | 0 | 0 | 1
   total:  4 |  3 | 0 | 0 | 7
   %:     57 | 43 | 0 | 0 | 100   (very valuable/valuable 100, not very/not valuable 0)

5. Dissertation
   2013:   0 |  0 | 0 | 0 | 0
   2014:   4 |  0 | 0 | 0 | 4
   2015:   2 |  0 | 0 | 0 | 2
   2016:   0 |  1 | 0 | 0 | 1
   total:  6 |  1 | 0 | 0 | 7
   %:     86 | 14 | 0 | 0 | 100   (very valuable/valuable 100, not very/not valuable 0)

6. Job Placement Seminar (if you took it)
   2013:   0 |  0 |  0 | 0 | 0
   2014:   0 |  3 |  1 | 0 | 4
   2015:   0 |  1 |  0 | 0 | 1
   2016:   0 |  1 |  0 | 0 | 1
   total:  0 |  4 |  1 | 0 | 5
   %:      0 | 80 | 20 | 0 | 100   (very valuable/valuable 80, not very/not valuable 20)

7. Mock interview (if you did one)
   2013:   0 |  0 | 0 | 0 | 0
   2014:   1 |  1 | 0 | 0 | 2
   2015:   0 |  0 | 0 | 0 | 0
   2016:   0 |  0 | 0 | 0 | 0
   total:  1 |  1 | 0 | 0 | 2
   %:     50 | 50 | 0 | 0 | 100   (very valuable/valuable 100, not very/not valuable 0)

8. Please evaluate the performance of the program as a whole in preparing you for the profession
   (columns: excellent | good | fair | poor | total)
   2013:   0 |  0 |  0 | 0 | 0
   2014:   2 |  1 |  1 | 0 | 4
   2015:   1 |  1 |  0 | 0 | 2
   2016:   0 |  1 |  0 | 0 | 1
   total:  3 |  3 |  1 | 0 | 7
   %:     43 | 43 | 14 | 0 | 100   (excellent/good 86, fair/poor 14)

Change in Response to Findings: 

Spring 2015:

As noted above in “assessment findings,” the literature faculty agrees that the data collected so far paint a very positive picture of the program. Given the small data pool, these results are preliminary. For the same reason, however, we agree it would be a mistake to make major changes to the program on the basis of the few individual sub-categories in a few of the assessment instruments that suggest possible trouble spots.

 

We have, however, agreed to make a small change to the Job Search Workshop in response to one result in the Mock Interview direct assessment data. On item 4, “ability to present expertise in the student’s teaching fields,” 28% of faculty responses scored the interviewee in the “barely meets” or “fails to meet” range. In our own recent job searches, faculty committees have repeatedly noted that weakness in this area (at actual MLA interviews) is surprisingly common and regularly serves as a basis for eliminating several candidates from the campus visit pool. The Job Search Workshop will accordingly stress this area of interview preparation even more strongly than we have in the past; mock interview committees, in the post-mock-interview de-briefing period, will likewise stress this aspect of interview preparation and review with the mock-interviewee the strengths and weaknesses of his or her performance.

 

We decided that another low-scoring area in the mock interview questionnaires doesn’t merit increased focus: faculty judged the performance of 44% of interviewees at the “barely meets” or “fails to meet” level in area 5, “thorough preparation about the school and program that is posited as doing the interview.” After discussion, we decided that this is an allowable lapse in the mock interview context: it’s not clear that students should spend the several hours of work necessary to develop impressive expertise about the curriculum, the student body, and departmental faculty research and publications, though in the Job Search Workshop we will continue to indicate that, for real interviews, such preparation is essential.

 

One response in the PhD exit interview indirect assessment instrument may merit continued attention, though the faculty did not yet come to any decision about it: the only facet of the program about which students seemed conspicuously lukewarm was the preparation and submission of the Qualifying Paper (for admission to the PhD level of the program): 50% of respondents (4 of 8) rated these activities “not very valuable” or “not valuable.” This bears further watching.

 

 

SPRING 2017

 

Changes in Response to Assessment Findings: Spring 2017

 

As noted above in “assessment findings,” the literature faculty agrees that the data collected so far paint a very positive picture of the program. Each assessment instrument presents a strongly positive overall picture of the success of that facet of our program. Since areas of relative weakness are unusual and isolated, and since our data pool is still relatively small, we will continue to monitor those individual facets of student learning rather than propose immediate programmatic changes on the basis of the few individual sub-categories in a few of the assessment instruments in which performance is relatively less strong.

 

We will continue to monitor performance on the following few specific skill sets, directing our faculty to be especially attentive to these aspects of student learning:

 

Comprehensive exam: writtens

 

“A professional-level ability to situate discussions prompted by the committee's written questions in an appropriately chosen scholarly approach”:

  • exceeds / meets 75%; barely meets, fails to meet 25%
  • exceeds 47%, meets 28%, barely meets 23%, fails to meet 3%

This skill requires students to triangulate primary text, committee question, and pertinent scholarship and theory; faculty will be notified that they may want to work specifically on this skill set with candidates as they prepare for the Comprehensive exam.

 

 

Comprehensive exam: oral

 

“Depth of knowledge of each individual text on each list”

  • exceeds / meets 79%; barely meets, fails to meet 21%
  • exceeds 67%, meets 13%, barely meets 18%, fails to meet 3%

This skill set requires students to have reviewed individual texts recently enough to be able to adduce textual details pertinent to unanticipated faculty questions; faculty will be notified that they may want to work specifically on this skill set with candidates as they prepare for the Comprehensive exam.

 

“Ability to move freely among texts and groups of texts in constructing on the spot analyses and arguments”

  • exceeds / meets 78%; barely meets, fails to meet 22%
  • exceeds 49%, meets 29%, barely meets 18%, fails to meet 4%

This skill set requires on-the-spot flexibility and ingenuity; pre-exam practice in mock-exam exercises may be useful in developing this skill. Faculty will be notified that they may want to work specifically on this skill set with candidates as they prepare for the Comprehensive exam.

 

 

Mock interview

 

The data set for the mock interviews is identical to what it was when we addressed issues raised by this assessment instrument in our 2015 assessment findings and recommended actions in response to those findings.

 

We found two areas in which performance was not optimal:

 

“Ability to present expertise in the student's teaching fields.”

  • exceeds / meets 72%; barely meets, fails to meet 28%
  • exceeds 11%, meets 61%, barely meets 22%, fails to meet 6%

 

As we noted in 2015:

 

We have, however, agreed to make a small change to the Job Search Workshop in response to one result in the Mock Interview direct assessment data. On item 4, “ability to present expertise in the student’s teaching fields,” 28% of faculty responses scored the interviewee in the “barely meets” or “fails to meet” range. In our own recent job searches, faculty committees have repeatedly noted that weakness in this area (at actual MLA interviews) is surprisingly common and regularly serves as a basis for eliminating several candidates from the campus visit pool. The Job Search Workshop will accordingly stress this area of interview preparation even more strongly than we have in the past; mock interview committees, in the post-mock-interview de-briefing period, will likewise stress this aspect of interview preparation and review with the mock-interviewee the strengths and weaknesses of his or her performance.

 

“Thorough preparation about the school and program that is posited as doing the interview”

  • exceeds / meets 56%; barely meets, fails to meet 44%
  • exceeds 17%, meets 39%, barely meets 28%, fails to meet 17%

 

As we noted in 2015:

 

We decided that another low-scoring area in the mock interview questionnaires doesn’t merit increased focus: faculty judged the performance of 44% of interviewees at the “barely meets” or “fails to meet” level in area 5, “thorough preparation about the school and program that is posited as doing the interview.” After discussion, we decided that this is an allowable lapse in the mock interview context: it’s not clear that students should spend the several hours of work necessary to develop impressive expertise about the curriculum, the student body, and departmental faculty research and publications [for a mock interview], though in the Job Search Workshop we will continue to indicate that, for real interviews, such preparation is essential.

 

 

MA indirect assessment: exit survey

 

Qual paper

  • very valuable/valuable 67%; not very valuable/not valuable 33%
  • very valuable 25%, valuable 42%, not very valuable 17%, not valuable 17%

 

These figures (for our 4-year window) represent an improvement over our 2013 and 2014 two-year window. Indeed, in 2015 and 2016, of the 4 students who completed the Qualifying Paper, 2 found it very valuable and 2 found it valuable, while 0 found it not very valuable or not valuable. We will, however, continue to monitor student responses to the Qualifying Paper. As we noted in 2015:

 

One response in the PhD exit interview indirect assessment instrument may merit continued attention, though the faculty did not yet come to any decision about it: the only facet of the program about which students seemed conspicuously lukewarm was the preparation and submission of the Qualifying Paper (for admission to the PhD level of the program): 50% of respondents (4 of 8) rated these activities “not very valuable” or “not valuable.” This bears further watching.

 

 

Other changes to the program (not in direct response to our formal assessment activities):

 

Program design:

 

The timetable for our MA degree is somewhat out of line with national norms, as a result of our heavy teaching load. Our GATs ordinarily teach 2 sections of freshman writing per semester, and 6 units of graduate course work (a full-time load for GATs) is the norm in our program. Our 30-unit program therefore normally requires 5 semesters to complete.

 

As a result, students going on to the PhD in our program formerly waited more semesters than the national norm before taking the MA breadth-oriented written and oral exams. In Spring 2016, the literature faculty therefore voted to de-couple the timing of the MA breadth exam from the completion date for the MA program. Students now take the MA examination during their 4th semester in the MA program, though they do not typically complete the requisite 30 units until their 5th semester. This change frees up an additional semester of study for concentration on intended sub-fields of specialization for those students who have passed the MA exam and are going on to the PhD (a majority of our students). In order to facilitate this timetable change, we agreed to reduce the examination reading list from 62-65 works to 42.

 

 

 

 

 

 

 

 

Updated date: Thu, 06/08/2017 - 15:05