Entomology and Insect Science Graduate Interdisciplinary Program

Overview: 

Insects make up most of multicellular life on Earth, contribute to many of the major human and agricultural plagues, and provide superb model systems for studying all levels of biological organization. The University of Arizona (UA hereafter) is exemplary among U.S. universities in its wealth of scientists from diverse fields studying insects. The Graduate Interdisciplinary Program in Entomology & Insect Science (GIDP-EIS) capitalizes on this fact, allowing graduate students to participate in interdisciplinary education while working with this outstanding concentration of resources. The strong focus on interdisciplinary research creates a mechanism for graduate training that draws on UA faculty dispersed across the biological, social, and health sciences whose work focuses on insects. The program enables students to develop cross-disciplinary connections and bring together different aspects of insect biology. Students must minor in a program other than EIS. The core EIS courses are mostly taught by faculty in the Department of Entomology (Table 4); in addition, our students take courses from existing curricula across the university (Appendix 7). Course requirements set by the program are few, and graduate committees can waive program requirements (e.g., particular course, teaching, or speaking requirements) under special circumstances if the committee determines they do not suit a particular student's goals or needs.

The University of Arizona Graduate Interdisciplinary Program (GIDP) in Entomology and Insect Science (EIS hereafter) is one of the UA’s 15 GIDPs; as such, EIS is housed within the Graduate College rather than within any single department or college.

The program has 33 faculty members in ten academic units across four colleges: Biosphere 2, Chemistry & Biochemistry, Entomology, Ecology & Evolutionary Biology, Epidemiology & Biostatistics, Geography, Molecular and Cell Biology, Neuroscience, Nutritional Science, and Plant Sciences. Twenty-three of the EIS faculty are full professors, eight are associate professors, and five are assistant professors.

Expected Learning Outcomes: 

The Entomology and Insect Science (EIS) Graduate Program has six learning outcomes, all scored on a five-point scale in which 1 is “well below average,” 2 is “below average,” 3 is “average,” 4 is “good” and 5 is “excellent” based on the expectations for students graduating from the program. 

1. The student demonstrates understanding of key concepts in insect biology as well as those underlying his/her general subject area (e.g. physiology, molecular biology, genomics, ecology, systematics, evolution or behavior). 

2. The student exhibits critical thinking skills to evaluate the scientific literature essential for his/her research area and articulates how his/her research fits into and/or advances the discipline.

3. The student develops creative and innovative research ideas and approaches.

4. The student uses multiple research approaches to collect scientific data related to his/her research area, and can interpret, analyze and critique his/her data.

5. The student communicates his/her research (importance, approaches taken, summary and interpretation of results) effectively through oral presentation.

6. The student can describe his/her research and express the potential impact of his/her work on society in lay terms.

 

Sample EIS Graduate Program Assessment Rubric

Assessment Activities: 

Expected Learning Outcome    Assessment Activities
Outcome 1                    Committee Meetings; Oral Comprehensive Exam; Final Defense
Outcome 2                    Committee Meetings; Oral Comprehensive Exam; Final Defense
Outcome 3                    Committee Meetings; Oral Comprehensive Exam; Final Defense
Outcome 4                    Committee Meetings; Oral Comprehensive Exam; Final Defense
Outcome 5                    Committee Meetings; Oral Comprehensive Exam; Final Defense
Outcome 6                    Committee Meetings; Annual Progress Report; Oral Comprehensive Exam; Final Defense

The table above indicates the assessment activities through which each learning outcome is assessed. Assessment forms are completed by faculty committee members and students (as self-assessment) for each activity. Program staff ensure the completion and submission of assessment forms through semi-annual program-wide reminders, and by following up with individuals when forms are due.

In addition to the direct measure of our expected learning outcomes through assessment forms, the program performs indirect assessment of program outcomes through monitoring of retention rates and graduate employment in the field of study. Both indicators reveal positive learning outcomes. Finally, the Program Chair conducts Exit Interviews at the conclusion of each student's program of study. These interviews provide student feedback on program strengths and weaknesses.

Assessment Findings: 

The direct assessment forms were used for the first time in the 2013-2014 academic year.  In February of 2016 we had 18 “assessment events” to evaluate.  These include MS assessments (1 first-year committee meeting, 3 second-year committee meetings, 1 third-year committee meeting, and 3 final defenses) and PhD assessments (2 second-year committee meetings, 1 third-year committee meeting, 5 fifth-year committee meetings, 1 oral comprehensive exam, and 1 final defense).  With reminders from the Program Chair and the Program Coordinator, assessments are becoming more consistent; variation in participation both between events and within events has decreased in the past year. This trend is expected to continue as this activity becomes more routine. Another earlier observation, that assessors tend to assign homogeneous scores, also seems to be decreasing over time, indicating that more thought may be given to the individual scores.

We summarized the scores for the two types of events for which we had a sample size of at least five students with assessments.  These were: PhD committee meetings for students in their fifth (or later) year, and MS final defenses. Mean scores on the learning outcomes (with 1 representing “well below average” and 5 representing “excellent”) are presented for these two events below in Table 17.

Table 17. Mean learning outcome scores (LOS).  Data are from the two types of events in 2013-2016 (MS defense and 5th-year PhD student committee meeting) for which we had a sample size of at least five events.  Each event was for a different student (1-5 faculty reporting for each student/event).  Note that only one PhD student performed a self-assessment at their committee meeting.

 

Event                                  Reporting                                    LOS 1   LOS 2   LOS 3   LOS 4   LOS 5   LOS 6
MS defense                             Graduate committee (n=5 committees)          4.67    4.67    5       4.67    4.33    5
MS defense                             Student self-assessment (n=5 students)       4.5     4       4.25    4.25    4.25    4.5
Annual committee meeting (5th year)    Graduate committee (n=6 committees)          3.6     3.5     3.5     3.8     3.8     3.5
Annual committee meeting (5th year)    Student self-assessment (n=1 student)        4       4       5       5       4       5

It is too soon to put very much weight on even these data, given that a sample size of five is not very large and there is variation in student performance that may obscure program strengths and weaknesses. However, for the MS defense data, one can say that the program faculty were, overall, very satisfied with student learning, with the highest scores going to “innovation and creativity” and “communicating research in lay terms.” The MS student self-assessments were quite similar.  The slightly lower scores for 5th-year PhD students may seem counter-intuitive, but may reflect the instructions on the form (Appendix 9) to score students relative to our expectations for a student at the end of their program.  Thus a finishing MS student who has learned what we would expect of an MS student at his or her defense may score higher than a more advanced PhD student who is not yet at the level we expect for someone with a doctorate from our program.

Exit interview conversations hit some familiar themes. All three of the MS students who spoke to the Chair (here identified as A, B and C) had been funded throughout their programs with a combination of program funds, TAships, and in one case an NSF GRF, but cited the lack of guaranteed funding as a source of stress for either themselves or other students.  Student B said, “I didn’t know I would have to compete for a TA outside our program.” Interestingly, in the few years that these students have been in the program, we have funded close to 100% of our students, so the reality, which is that students stay funded, is better than the somewhat troubling perception.  Student A felt that the program events (monthly lunches, summer laboratory tours) were good efforts at keeping the student community in touch, even though he felt that the students themselves could do more to this end.  Student C felt that the “cohort-building courses” for incoming students were a good way for students to get to know each other, although he said that as his MS research progressed toward completion, he was too busy to participate in most social interaction beyond the laboratory.  Student B was especially impressed by the outreach impact of the Arizona Insect Festival, coordinated by the Entomology Department and involving EIS graduate students.  “Keep doing that,” he said.  Student C thought the recruiting visit, with structured activities and very focused attention from his eventual advisor, was important in attracting him to the program.  He contrasted this with another university’s recruiting weekend, in which the potential advisor talked with the prospective student for a total of 15 minutes.  Student C ended by saying that he felt very positive about the program overall, that it was apparent that the program faculty were concerned about their students, and that the students themselves had a “diverse mix of interests.”

Change in Response to Findings: 

In mid-September 2014, nine faculty members, the Program Chair, and the Program Coordinator met to discuss the findings.  The conversation was wide-ranging, and because of the preliminary nature of the data, the discussion focused more on designing an informative, intuitive assessment system than on making program changes in response to the assessment findings.  Comments were as follows.

1) Suggestions to improve the rubric for scoring learning outcomes were made.  One suggestion was to change 1 from “well below average” to “far below expectations,” 2 from “below average” to “approaching expectations,” 3 from “average” to “meeting expectations,” 4 from “good” to “surpassing expectations,” and 5 from “excellent” to “far exceeding expectations.”  Further suggestions along this line had to do with making the scoring rubric more concrete and detailed for each learning outcome.  As a hypothetical example, for Learning Outcome 1, which refers to mastering content, 1 could mean “lacking fundamental understanding of expected content” while 5 could mean “academic professional mastery of expected content.”

2) In addition to the data presented so far, two suggestions were made for further analyses.  a) Look at the range of values among faculty members at an event for a particular student to determine whether faculty are consistent in their assessment, e.g., the difference between the highest and lowest scores for a particular learning outcome is not more than 1. Low variation in range suggests the rubric is meaningful to the faculty and can be used to discriminate among students (a minimal sketch of this calculation appears after this list).  This was assessed for the 5th-year PhD committee meetings described above.  For the five students and six learning outcomes (30 scores), the majority, 23, were within 1 point of each other (e.g., all scores either 2 or 3).  This was true even though not all students were scored uniformly high; for Learning Outcome 1, the range of scores was 1-2 for one student and 4-5 for another.   b) Look at the small amount of data we have so far and determine whether the scores for committee meetings in the first two years are lower than those for the last few years.  Unfortunately, our spreadsheet for PhDs does not have enough data to make this comparison.

3) The consensus of the faculty present was that we would finish the year with this system, but that in the spring we would engage faculty and students to design a system that is objective and helpful in understanding our program's effectiveness.

4) The faculty at the meeting were supportive of increasing on-campus opportunities (perhaps as an additional requirement) for students to present their research, either through a weekly spring seminar series, a yearly retreat for faculty and student presentations, or both.  These presentations would serve as an opportunity for wider faculty and student peer participation in learning outcome assessment.
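To make the consistency check described in point 2a concrete, the short Python sketch below shows how the “within 1 point” criterion could be tabulated from an assessment spreadsheet. The student names and scores in the example are invented for illustration only and are not the program's actual assessment data.

    # Illustrative sketch of the "within 1 point" consistency check from point 2a.
    # Each row is (student, learning outcome, score from one faculty assessor);
    # the values below are hypothetical, not real program data.
    from collections import defaultdict

    example_rows = [
        ("Student A", 1, 2), ("Student A", 1, 2), ("Student A", 1, 1),
        ("Student B", 1, 5), ("Student B", 1, 4), ("Student B", 1, 4),
        ("Student B", 2, 3), ("Student B", 2, 5), ("Student B", 2, 4),
    ]

    # Group the faculty scores by (student, learning outcome).
    scores = defaultdict(list)
    for student, outcome, score in example_rows:
        scores[(student, outcome)].append(score)

    # A student/outcome combination "agrees" when the spread between the highest
    # and lowest faculty score is no more than 1 point.
    agreeing = sum(1 for vals in scores.values() if max(vals) - min(vals) <= 1)
    print(f"{agreeing} of {len(scores)} student/outcome combinations were within 1 point")

Applied to the program's actual spreadsheet, this is the kind of tally that produced the 23-of-30 figure reported in point 2a.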
