Graduate Interdisciplinary Programs: Statistics

Overview: 

The University of Arizona has chosen to coordinate its graduate education in statistics through a Graduate Interdisciplinary Program in Statistics. This approach, unique among Research I universities, has the advantage of flexibility in meeting the rapidly evolving research needs of a land-grant university. This structure enables Statistics students to design and complete research projects that bring the science of statistics to a broad array of disciplines, while at the same time creating statistical theory and methods suited to modern needs.

The Interdisciplinary Program in Statistics meets workforce and research needs through its Certificate Program, Master's degree, Accelerated Master's degree, Doctoral minor, and Doctoral program, which has both a traditional track and an informatics track. The Graduate Committees play a highly interactive role in working with students.

Program Mission:

The mission of the Graduate Program in Statistics is to provide an environment in which students become independent researchers and practitioners who make significant contributions at the forefront of knowledge across disciplines that rely on statistical thinking. By merging data science approaches with practical innovation, our students are prepared to make fundamental advances in both statistical theory and statistical methodology. Program members are dedicated to bringing theoretical, methodological, and applied expertise in statistics through course offerings, student mentoring and advising, and research collaborations. This results in extensive coordination across the campus's statistical curricula, course offerings, and student involvement, and supports the University's mission to educate and train the next generation of data scientists.

 

Expected Learning Outcomes: 
1. Student demonstrates understanding of the key concepts in the theory of probability and statistics and can communicate that understanding through a well-constructed theoretical argument.

2. Student demonstrates understanding of the key concepts in statistical methodology and can communicate that understanding through effective experimental design and sophisticated use of statistical and computational tools.

3. Student develops creative and innovative research ideas and approaches that can further the body of statistical knowledge and contribute to significant advances in the intended field of application.

4. Student communicates statistical ideas clearly, both in writing and orally, and adapts the presentation to suit the intended audience.

5. Student can describe statistics research and its impact in the context of a broad discussion of the application of statistics in the given field.

 

Assessments are scored 1 (low) to 5 (high) by each individual faculty member present at the activity. The student also completes a self-assessment.
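As an illustration of this scoring model, the sketch below (in Python, with hypothetical scores and a hypothetical function name) averages the 1-5 ratings given by each faculty member per learning outcome, leaving unassessed outcomes as N/A:

```python
# Hypothetical scores from three faculty assessors at one activity.
# Each inner list has one entry per learning outcome (1-5);
# None marks an outcome not assessed at that activity (N/A).
faculty_scores = [
    [4, 4, None, 3, None],
    [5, 3, None, 4, None],
    [3, 4, None, 4, None],
]

def outcome_means(scores):
    """Mean score per learning outcome across assessors, skipping N/A."""
    means = []
    for outcome in zip(*scores):          # one column per outcome
        rated = [s for s in outcome if s is not None]
        means.append(round(sum(rated) / len(rated), 2) if rated else None)
    return means

print(outcome_means(faculty_scores))  # [4.0, 3.67, None, 3.67, None]
```

The same aggregation, applied per activity and per cohort, produces the mean-score tables reported under Assessment Findings below.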

Assessment Activities: 
Regular or Recurring Activities
GPA Requirement
All students are expected to maintain a grade point average greater than or equal to 3.0, as calculated over all courses taken for a letter grade, with particular emphasis on the core courses in the M.S. or Ph.D. curricula.
 
Meetings with Program Chair or Graduate Admissions Chair
One-on-one meetings with the Program Chair or the Graduate Admissions Chair until a student's advisor has been selected.
 
Meetings with the Graduate Advisor
Once a graduate advisor has been selected, regular meetings with the advisor are used to assess and ensure progress towards a student's successful completion of the degree or certificate program.
 
Special or Occasional Activities

For Master's students, we make assessments at:

a) Written qualifying exams (for students who take the non-thesis option)

b) Annual progress report

c) Final oral (for students who write a thesis)

For Doctoral students, we also include:

d) Committee meetings

e) Written comprehensive exam

f) Oral comprehensive exam

g) Scholarly presentations

For those who teach, we also include:

h) Classroom presentations

Not every learning outcome is assessed at every assessment activity; instead, assessments follow the table below.

 

Assessment activity         | Outcome 1 | Outcome 2 | Outcome 3 | Outcome 4 | Outcome 5
Written qualifying exams    |     *     |     *     |           |     *     |
Classroom presentation      |           |           |           |     *     |
Committee meeting           |           |           |     *     |     *     |     *
Annual review               |           |           |     *     |     *     |
Written comprehensive exams |     *     |     *     |     *     |     *     |     *
Oral comprehensive exam     |     *     |     *     |     *     |     *     |     *
Scholarly presentation      |           |           |     *     |     *     |     *
Final oral                  |           |           |     *     |     *     |     *


Assessment Findings: 

Master’s Program

2017 mean assessment scores, MS program, Year 1

Assessment               | Outcome 1 | Outcome 2 | Outcome 3 | Outcome 4 | Outcome 5
Annual Review            |    N/A    |    N/A    |    3.8    |    3.1    |    N/A
Written Qualifying Exams |    4.0    |    3.8    |    N/A    |    3.8    |    N/A

 

2017 mean assessment scores, MS program, Year 2

Assessment               | Outcome 1 | Outcome 2 | Outcome 3 | Outcome 4 | Outcome 5
Annual Review            |    N/A    |    N/A    |  no data  |  no data  |    N/A
Written Qualifying Exams |     4     |     4     |    N/A    |     3     |    N/A

 

2017 mean assessment scores, MS program, Year 3 and above

Assessment               | Outcome 1 | Outcome 2 | Outcome 3 | Outcome 4 | Outcome 5
Annual Review            |    N/A    |    N/A    |    3.25   |    3.75   |    N/A
Written Qualifying Exams |    2.3    |     1     |    N/A    |    2.5    |    N/A

 

 

Doctoral Program

2017 mean assessment scores, PhD program, Year 1

Assessment               | Outcome 1 | Outcome 2 | Outcome 3 | Outcome 4 | Outcome 5
Annual Review            |    N/A    |    N/A    |    2.6    |     3     |    N/A

 

2017 mean assessment scores, PhD program, Year 2

Assessment               | Outcome 1 | Outcome 2 | Outcome 3 | Outcome 4 | Outcome 5
Annual Review            |    N/A    |    N/A    |    3.67   |    3.33   |    N/A
Written Qualifying Exams |    3.78   |    3.6    |    N/A    |    3.78   |    N/A

 

2017 mean assessment scores, PhD program, Year 3

Assessment               | Outcome 1 | Outcome 2 | Outcome 3 | Outcome 4 | Outcome 5
Annual Review            |    N/A    |    N/A    |    4.1    |    3.75   |    N/A
Committee Meeting        |    N/A    |    N/A    |    4.5    |    4.25   |     4

 

2017 mean assessment scores, PhD program, Year 4

Assessment                 | Outcome 1 | Outcome 2 | Outcome 3 | Outcome 4 | Outcome 5
Annual Review              |    N/A    |    N/A    |    4.33   |    4.33   |    N/A
Written Comprehensive Exam |    4.33   |    4.33   |    4.5    |    3.67   |    4.5
Oral Comprehensive Exam    |    4.5    |    4.75   |    4.5    |     4     |     4

 

2017 mean assessment scores, PhD program, Year 5 and above

Assessment              | Outcome 1 | Outcome 2 | Outcome 3 | Outcome 4 | Outcome 5
Annual Review           |    N/A    |    N/A    |    3.9    |    4.1    |    N/A
Committee Meeting       |    N/A    |    N/A    |     4     |     4     |     3
Oral Comprehensive Exam |    4.28   |    4.5    |    4.3    |    4.1    |    4.05
Final Oral              |    N/A    |    N/A    |    4.2    |    4.1    |    3.9

Classroom and Scholarly Presentation data were not available.

 

Program-Level Trends

By breaking down assessment scores by program and year, we are able to assess whether students perform as expected given their program status (i.e., fifth-year Ph.D. students should be scoring 5s on assessment outcomes, while first-year students are not expected to meet this standard). The data tables above break down assessment scores by event within program and year groups.

While the data suffers from small sample sizes, there are a few detectable trends that merit consideration:

  1. Absence of a positive relationship between Annual Review outcomes and M.S. student year.
  2. Presence of a positive relationship between Annual Review outcomes and Ph.D. program year, except between year 4 and years 5 and above.
  3. Low Written Qualifying Exam outcomes for 3rd-year M.S. students.


Student-Level Trends

The Annual Review assesses students via progress reports, providing an opportunity to track individual student progress through the program. As of May 2017, only two years of assessment data were available. However, this limited dataset indicates that student scores are not improving in the pattern expected given the assessment prompt, which asks evaluators to use the same benchmarks for beginning and advanced students. The assessment form explicitly states that “low scores for beginning students will simply be interpreted as a reflection of their understanding at the beginning of the program… [and] high marks for advanced students would indicate success in achieving our learning objectives and outcomes.” Of the 10 sets of reports with both 2015 and 2016 data, only two students improved their scores over time, and three students scored slightly worse (data not shown).

 

Student and Faculty Score Congruity

All events are assessed by both the student and faculty committee members. Across all learning outcomes, students tend to assign themselves higher scores than faculty members do. In most cases these differences were small (1 point across 1-2 outcomes), but in a few cases there was a significant discrepancy between student and faculty scores, both in events without committee interaction (Annual Reviews, Qualifying Exams) and in events that allow faculty feedback (Oral Comprehensive Exams and Committee Meetings).

Change in Response to Findings: 

Program-Level Trends

  1. Absence of a positive relationship between Annual Review outcomes and M.S. student year.

The absence of a positive relationship between assessment scores and program year, particularly in the two-year M.S. program, suggests that student performance does not improve across years of the program. In theory, outcome assessment scores should increase by program year. To determine whether there is an issue with student improvement, the Statistics Executive Committee will seek feedback from program advisors. If the failure of students to improve is verified, the program will need to reconsider its core requirements.

The alternative explanation for the lack of a positive trend is evaluation error; assessors may be failing to assess students against graduation benchmarks as instructed on the assessment form, resulting in a plateau of scores across program years. In the upcoming academic year, the Statistics Executive Committee will seek feedback from program faculty and students to improve the format of the assessment, hopefully ensuring future adherence to the assessment protocol (grading students against expected final learning outcomes, not their current stage in the program).

  2. Presence of a positive relationship between Annual Review outcomes and Ph.D. program year, except between year 4 and years 5 and above.

As before, the unexpected results of the assessment could again be a problem of assessment error. However, because the trend is positive except between the fourth year and beyond, it suggests that the assessment is detecting that students who have been in the program for at least five years are not performing to the degree expected by committee members. A possible explanation is the failure of advanced Ph.D. students to wrap up their degrees in the 2016 academic year. In AY2016, the program had few degree completions, with many students pushing back their graduations by at least one semester. In 2017, degree completions jumped to 8 M.S. graduates and 8 Ph.D. graduates. Data from the 2017 Annual Review will provide more information on whether the decline in advanced Ph.D. student performance persists. If it does, the Statistics Executive Committee will create a plan to ensure students complete final degree requirements in a timely manner.

  3. Low Written Qualifying Exam outcomes for 3rd-year M.S. students.

Lower Qualifying Exam learning outcome scores among students further along in the program indicate that the program has not been adequately responsive to the needs of students who underperformed on past exams. In all cases, 3rd-year M.S. students taking the Qualifying Exams are doing so for the second time, after failing on the first attempt. The Statistics GIDP has a two-chance policy for the Qualifying Exams: students who fail the exam a second time are required to complete an M.S. thesis or leave the program.

Third-year M.S. students should perform at least as well as 1st- and 2nd-year students, because a) they are further along in the program, and b) they have experienced the test environment and know how to prepare. However, both the assessment data and testing outcomes illustrate that this is not the case (multiple 3rd-year students have failed the Qualifying Exams twice and needed to complete a thesis). The program needs to be more responsive to students who have failed the exam on the first attempt in order to combat this trend.

In the 2017 Academic Year, the first official Statistics GIDP Qualifying Exam study group was initiated. This student-led group met weekly in the Statistics Community Room to prepare for the May 2017 Qualifying Exam. Pending strong outcomes on the exams, the program will commit to the continuation of this student study group, including room reservations, advertising, student leadership recruitment, and student-requested resources. The hope is that this study group will strengthen student Qualifying Exam performance across the board, and 3rd-year students will have the resources and motivation they need to excel in their second exam.

 

Student-Level Trends

Data on student-level trends remains very preliminary at this point – only ten two-year sets of student Annual Review outcomes were available, and there was no significant trend across the data. However, the assessment model does not seem to be producing the expected positive relationship between outcomes and program year.

The absence of a positive trend in Annual Review outcomes across consecutive years indicates either that students are not improving or that the assessment is being completed incorrectly. More information will become available with the 2017 Annual Reviews, to be completed in June 2017. The Statistics Executive Committee will seek feedback from program advisors on student progress over time, and will work to improve the format of the assessment, hopefully ensuring future adherence to the assessment protocol (grading students against expected final learning outcomes, not their current stage in the program).

 

Student and Faculty Score Congruity

The pattern of incongruity between student and faculty assessment scores suggests the following:

  1. The program needs to be clearer in its communication of expected learning outcomes to students.  
  2. Graduate committees should provide more explicit feedback in events that allow faculty feedback (Oral Comprehensive Exams and Committee Meetings).

A student should not come out of a Committee Meeting and give themselves 5s across the assessment outcomes when their committee members award them 3s and 4s. The Statistics Executive Committee will emphasize the need for committee members to review and provide feedback on learning outcomes during assessment activities. To address student comprehension of learning outcomes, a review of the outcomes will be incorporated into Program Orientation, and further detail will be added to the program handbook.

 

The action items involving faculty (assessing student trajectories, improving the assessment format, improving adherence, increasing committee feedback) will be accomplished at a Statistics GIDP Assessment Meeting in the Fall 2017 semester. Student objectives (improving the assessment format, improving adherence) will be accomplished at Fall 2017 Program Orientation, which includes new and current students.

Updated date: Fri, 05/26/2017 - 13:36