
Faculty Learning Community on Program Assessment

Last Modified Date: 03/11/2013

The Faculty Learning Community (FLC) on Program Assessment is a community of colleagues working to develop faculty knowledge of and expertise with program assessment through study, support, and individually or collaboratively designed projects that benefit FLC members, their departments, and our university community.

 

FACILITATORS

Amy Kimme-Hea

Faculty Fellow for the OIA

Director, Writing Program

Associate Professor in the Rhetoric, Composition, & Teaching of English Program

Department of English

kimmehea@email.arizona.edu

 

Debra Tomanek

Assistant Vice Provost for Instruction and Assessment

Professor, Molecular and Cellular Biology and the College of Science Teacher Preparation Program

dtomanek@email.arizona.edu                

 

 


MEMBERS
Paul Blowers

Associate Professor, Chemical & Environmental Engineering

blowers@email.arizona.edu

Dr. Paul Blowers is currently an associate professor in Chemical and Environmental Engineering, a department he joined in 1999.  In his time at the UA, he received the faculty award for Excellence in Academic Advising in 2007, was selected by the National Academic Advising Association as one of the top four faculty advisors in the US in 2008, was named the College of Engineering da Vinci Circle award winner for engagement with students in 2009, and in 2010 was co-winner of the Leicester and Kathryn Sherrill Creative Teaching Award and winner of the UA Honors College Five Star Faculty Award.  He was named the Carnegie Foundation Arizona Professor of the Year, one of the top 27 faculty members at undergraduate institutions in the US, in 2011, and was most recently named University Distinguished Professor, the highest teaching honor at the UA, in 2012.

What is the challenge that my project addresses?

My project addresses both how to get faculty to provide assessment data and how to manage the data collection through D2L.

How did I pursue the challenge?

I built rubrics in D2L with the intent that these would be used to archive scores and numerical data about how well students were achieving chemical engineering's ABET-required a-k criteria.  I met with faculty prior to the beginning of the semester to detail what criteria would be measured in each class and then followed up with another meeting after the semester was over.

What have I learned?

I've learned that data cannot be easily pulled from D2L, but that it was still useful to have rubrics that detailed how students were meeting the criteria and what the threshold values for success were.  Grading the assignments that targeted the different criteria automatically led to easy assessments.

My advice for others pursuing similar assessment challenges is:

Meet with faculty more often, provide reminders of what you need, and set timelines for deliverables.


Faiz Currim

Senior Lecturer, Management Information Systems

currim@email.arizona.edu

Faiz Currim is a senior lecturer in the Department of Management Information Systems (MIS). The MIS Department has developed assessment plans for its undergraduate and graduate programs. Dr. Currim received his Ph.D. from the University of Arizona in 2004 and teaches on-campus and online courses at both the undergraduate and graduate levels.

 

What is the challenge that my project addresses?

Our department wanted to review and revise our student assessment materials. A key challenge we addressed in this project was thinking in terms of program-level student learning outcomes instead of course-level outcomes.  Most of us looking at assessment know what courses our students should take to be successful, and why. We also had a sense of the course-level skills we wanted students to master.

Our project looked at higher-level outcomes across courses, from the perspective of the student. For example, instead of saying a student would learn to do specific kinds of software design in course A or course B, we abstracted that into a program-level outcome, "Understand and apply design principles in Information Systems."

How did I pursue the challenge?

During the course of the FLC meetings, we discussed outcomes in different programs (e.g., in the sciences and engineering) and also looked at exemplars from other disciplines. This, combined with feedback from the FLC group and Deb Tomanek, proved very helpful. We were able to develop and refine our student learning outcomes as well as map them to course-level outcomes (which could then be measured in tests or assignments).  One of our goals was to look at the assessment process as a feedback loop. In other words, to develop a data collection mechanism within individual courses that would feed into overall student learning outcomes. Examining the level of student performance on these outcomes would tell us whether we needed to provide students with more instruction in specific areas we believed were important.

What have I learned?

One of the important pieces of learning was understanding what makes a good student learning outcome. A student learning outcome (SLO), unlike a TCE, does not evaluate courses or faculty members. Instead, the SLO seeks to identify what is important for our students to learn and how we can measure whether they learned it (preferably in a quantitative and separable way) in courses.

My advice for others pursuing similar assessment challenges is:

A good place to begin is to look at the examples available on the assessment.arizona.edu site. This helps someone starting afresh to see how SLOs can be developed and measured (across courses). Once the outcomes are in place, it is helpful to simplify outcome measurement within courses. The purpose is to reduce the additional burden on faculty (which ultimately helps faculty participation in the assessment process). In the ideal scenario, an outcome can be measured through questions in existing course assignments or tests. In addition, if those assignments and test questions are easily identified and can be tracked separately, data collection is facilitated. It is helpful to have at least one instructor who is willing to ‘beta test’ the process and demonstrate it to other faculty. It is also important that the department is supportive of the assessment efforts. The beta test can help to achieve this, as it provides a concrete example of how data can be collected and used to improve the program and benefit students.


Wendy Davis

Lecturer, Animal Sciences

wdavis@ag.arizona.edu

Wendy Davis is a faculty member in the College of Agriculture and Life Sciences’ Department of Animal Sciences and Associate Coordinator for the Race Track Industry Program (RTIP). 

Established in 1973 at The University of Arizona, the RTIP is world renowned for its curriculum focused on the pari-mutuel racing and equine bloodstock industries. Program graduates - including two Kentucky Derby-winning trainers - account for many of the who’s who in racing today.  The program’s educational reach extends beyond the UA campus and its students: each year, RTIP faculty and students host one of the largest racing conferences on the international calendar; delegates representing 20 countries and six continents attended the most recent conference in December 2012.

In addition to her participation in the Faculty Learning Community on Program Assessment, Davis serves on CALS’ Curriculum and Assessment Committee as well as Animal Sciences’ Equine Steering Committee, Scholarship/Curriculum Committee, and Peer Evaluation Committee. She advises both RTIP students and those completing other animal science-based degree options, and teaches or co-teaches several courses in the RTIP.

The challenge that my Faculty Learning Community on Program Assessment project addresses is getting the initial “buy-in” and subsequent participation in the assessment process from faculty.  The main focus of this project is undergraduate curriculum, and the model uses a department with fifteen campus-based faculty members.

My approach to this challenge was to break the process up into very small steps with support at each step.  Once created, this step-by-step process doesn’t add a significant time obligation to any one faculty member and reduces the stress related to “Now what is it exactly that I’m supposed to do?” 

This process can be compared to a paint-by-number picture.  As each color (or step in the assessment cycle) gets completed (the various assessment activities), the picture becomes clearer (during data analysis). If the end result is a masterpiece, then leave well enough alone!  But the odds are that there will be “too much blue” or “not enough green” in some areas (as identified in the assessment findings), and this process will suggest where the palette should be amended (make programmatic changes) for a better composition the next time through (so that student learning outcomes are better met).

The following is a very brief timeline snapshot of the project:

1. Update the unit’s desired outcomes to make sure they are accurate, appropriately worded and agreed upon by faculty.

2. Create a brief survey for faculty to identify which outcomes they believe are addressed in their coursework or other student learning opportunities. 

3. Given the information gained from the survey, select the outcome(s) and courses/opportunities that will be used for the current cycle of assessment. 

4. Once selected, support those faculty members in identifying and creating the appropriate instrument (many times a custom-designed rubric) to be used to measure the outcomes.

5. Evaluate the findings and take the information back to the faculty to determine if any changes are needed to meet the stated learning outcomes.

Most of the above steps are ideally presented and discussed in faculty meetings or other appropriate group settings; in this case, they were carried out with faculty on an individual basis.

What I’ve learned to this point is that once the mystery is taken out of the process and it is made clear that this is not “reinventing the wheel” and doesn’t put another burden on faculty, they are supportive and even interested in the outcomes.

My advice to others is to assure the faculty that they are already “painting” many colors on the assessment canvas and that all we are doing is organizing the information in a logical way. Probably the biggest issue is convincing the faculty that finding something that isn’t working as planned or expected isn’t a negative mark on the faculty or department but rather an opportunity to make a positive change.

 


Ryan Foor

Assistant Professor & Director of Graduate Studies, Agricultural Education

rfoor@email.arizona.edu

Ryan M. Foor is an Assistant Professor and Director of Graduate Studies in the Department of Agricultural Education in the College of Agriculture and Life Sciences.  He teaches coursework in teacher preparation, agricultural communications, agricultural literacy, and leadership.  His research interests include teacher mentoring and induction.

 

What is the challenge that my project addresses?

My project focuses on assessing the graduate programs in our department.  We have two degrees, a master of science and a master of agricultural education (a practitioner-based degree).  Each degree has two option areas.  The master of science degree includes a research option and a professional agriculture option.  The professional agriculture option is designed for distance students.  Within the master of agricultural education degree, the options include a career and technical education (CTE) option and a practitioner option.  The CTE option is designed for students pursuing certification to teach high school agriculture, and the practitioner option is designed for practicing teachers.  While our programs are small, there is much diversity across the options in terms of the students pursuing them and the outcomes for each option area.  Therefore, our challenge was to design outcomes and assessment activities for each of the four degree options. 

How did I pursue the challenge?

My first consideration was to gather input from all faculty in our department.  We are a small department, with only five faculty members, so it was relatively easy to get everyone’s input at a faculty meeting.  My approach was to take the opportunity to look at the big picture of our programs by creating a logic model for each option area.  During the faculty meeting, I gave each person three index cards for each option area (12 cards total), labeled long-term, medium-term, and short-term.  For each option area, we started with the long-term card and I directed faculty members to create a list of the long-term impacts or consequences we seek for that option area.  We continued with the medium-term outcomes or actions for the degree option, and finished with the short-term outcomes or learning activities.  Then I recorded the outcomes for each degree option so that we could examine all outcomes and revise them collectively.  From these mutually agreed-upon outcomes, I created a logic model for each degree option and crafted the short-term learning outcomes for use in program assessment.  The logic models and learning outcomes were sent to the faculty for a final review, and revisions were made.  With the learning outcomes established, I identified existing learning activities in our graduate programs to serve as assessment points.  Finally, I created rubrics for assessment activities to measure the learning outcomes (e.g., a rubric for assessment of the master’s thesis and oral defense).  I shared the rubrics with the faculty for review and made revisions.  Data collection is underway for the 2012-2013 academic year.

What have I learned?

I have learned that program assessment does not have to be as difficult and complex as it initially sounds.  At first, I was aiming to collect and report data for each individual student in each degree option.  What I learned from the assistance provided by the Office of Instruction and Assessment is to “look at the bigger picture with the program” and report assessment data collectively for all students in the program.  Instead of tracking each student, we will take a collective approach to reporting data for each learning outcome.

My advice for others pursuing similar assessment challenges is:

Get the faculty on board early.  Creating a program assessment committee at the department level (especially for departments with a larger number of faculty) will help facilitate the process.


Herman Gordon

Associate Professor, Cellular & Molecular Medicine

flash@arizona.edu

  • Harvard AB in Biochemistry
  • Caltech PhD in Developmental Neuroscience
  • Postdocs at the MRC and at UCSF
  • Faculty at UACOM since 1991
  • Current research interests in teaching scientific and medical problem solving

The admissions process at the College of Medicine is a challenging enterprise.  Thousands of applications are reviewed to produce a matriculating class of about 115 students each year.  The members of the Admissions Committee, 11 faculty and 5 medical students, each spend over 200 hours reviewing and voting on applications.  While the committee has been successful in choosing excellent students each year, there was a clear need for a formal assessment system.  

During this past year, the Admissions Committee developed and used an assessment system based on 11 attributes considered most desirable in entering students.  These included academic preparation and clinical exposure as well as "distance travelled".  Each attribute was scored 1 to 5 for each candidate reviewed.  In addition, each candidate was given a "gestalt" score.  The attribute scoring provided a means by which to rank-order every student considered as well as to focus discussion.  
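As a rough illustration of how such attribute scores can produce a rank order, here is a minimal sketch in Python. It assumes an equal-weight sum of the attribute scores with the gestalt score used only to break ties; the attribute names, weighting, and numbers are hypothetical, not the committee's actual procedure.

```python
# Hypothetical sketch: rank-order candidates by summed 1-5 attribute scores,
# breaking ties with the "gestalt" score. Only 3 of the 11 attributes are
# shown, and all names and numbers are invented for illustration.

attributes = ["academic_preparation", "clinical_exposure", "distance_travelled"]

candidates = {
    "Applicant A": {"scores": {"academic_preparation": 5, "clinical_exposure": 3, "distance_travelled": 4}, "gestalt": 4},
    "Applicant B": {"scores": {"academic_preparation": 4, "clinical_exposure": 5, "distance_travelled": 4}, "gestalt": 5},
    "Applicant C": {"scores": {"academic_preparation": 4, "clinical_exposure": 4, "distance_travelled": 4}, "gestalt": 3},
}

def total_score(record):
    """Equal-weight sum of the per-attribute scores (an assumption)."""
    return sum(record["scores"][a] for a in attributes)

# Sort by total attribute score, then by gestalt score, highest first.
ranked = sorted(candidates.items(),
                key=lambda item: (total_score(item[1]), item[1]["gestalt"]),
                reverse=True)

for rank, (name, record) in enumerate(ranked, start=1):
    print(f"{rank}. {name}: total={total_score(record)}, gestalt={record['gestalt']}")
```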

In the future, the attribute scores will be used in conjunction with student outcomes to prioritize those attributes that are the best predictors of later success.  In this way, the assessment process can be improved iteratively.


Chris Johnson

Assistant Professor, Educational Technology

cgj@email.arizona.edu

Chris Johnson is an Assistant Professor in the Master of Science in Educational Technology program at the University of Arizona, Sierra Vista.  The program is an online program that works with K-12, corporate, and higher-education instructional technologists and trainers, as well as with the Military Intelligence School at Fort Huachuca.

 

My FLC project originally focused on developing Program Assessment for the Educational Technology Program.  However, as I began work, I became aware that Dean Shockey was interested in infusing Program Assessment across the University of Arizona, Sierra Vista. An Assessment Committee, chaired by Dr. Dieter Steklis and including Dr. Aaron Tesch and myself, was formed, and we are now focused on three pilot projects. The first is an upper-division Bachelor of Applied Science program in Family Studies and Human Services. The second is our shared program in Psychology. And the third is the graduate program in Educational Technology. The Assessment Committee is developing a model for incorporating Program Assessment across UAS.  As part of this work, we are currently developing an assessment of critical thinking across a number of core UAS courses based on the Critical Thinking Rubric developed by the Association of American Colleges and Universities.

For the Educational Technology Program, we began by developing new Expected Learning Outcomes that are based on the standards of the Association for Educational Communications and Technology. We are just completing a curriculum map that aligns these new ELOs to our courses and our exit portfolio.  We will then identify assessment activities for data collection.

We are also developing an exit interview to be administered at graduation and a follow-up survey for our graduates to be administered a year after graduation. Both of these new assessment activities will be tied to our ELOs.  Once we have completed the curriculum map, program faculty will meet to identify an ELO for initial data collection and analysis to test our process.


Carl Maes        

Associate Dean, Optical Sciences

cmaes@optics.arizona.edu

Carl Maes earned his PhD in Optical Sciences at the University of Arizona. He was previously an Associate Professor at the United States Air Force Academy, Colorado, where he was a member of the Lasers and Optics Research Center. He is the Associate Dean of Academic Programs in the College of Optical Sciences.   His research spans adaptive optics, lasers, and studies of the fundamental nature of light.  He has 13 years of teaching experience in undergraduate and graduate courses in physics and optical sciences.

 

What is the challenge that my project addresses?

Assessment of the PhD in Optical Sciences (and, more broadly, of PhD programs in Physics and some Engineering disciplines). I really wanted to answer the question: how does one become a scientist, and how might that be assessed?

How did I pursue the challenge?

I held many discussions at FLC meetings, faculty meetings, faculty retreats, and Graduate Curriculum Committee meetings, and did a web search of graduate program assessment in Physics and Electrical Engineering at other institutions.

What have I learned?

Working with faculty is a complex dynamic that significantly constrains how assessment plans are developed and implemented. 

My advice for others pursuing similar assessment challenges is:

Developing and implementing a program assessment plan will depend greatly on whether there is institutional and department-head-level support to take the time to truly develop and implement a value-added assessment plan with broad support. 

The process can be enabled by a faculty committee with authority and membership comparable to other high-priority department committees.   This committee would take about two years to collect data: consider various PhD program milestone events; collect feedback on program quality, and on how to measure it, from students, faculty, alumni, first and second employers, and other relevant constituents; meet with faculty from other departments and institutions to discuss how a program's standard of quality is known and measured; consider NRC and other program rankings; and then propose an assessment plan to the rest of the faculty.  This dedicated and thoroughly researched approach by a team of faculty would stand a good chance of gaining broad faculty consensus.  To gain faculty support, the assessment plan must be relevant and actionable, with direct implications for the metrics used in national-level institutional rankings and accreditation and in faculty promotion and tenure, and both faculty and students must feel the effort is worth supporting.  In effect, this version of assessment would carry the same significance as, for example, a publication in a respected journal.  Assessment in this manner would not merely be an additional duty of select staff and faculty, but an important part of what we do as a faculty.

 


Katrina Miranda

Associate Professor and the Assistant Chair of Education and Assessment

Department of Chemistry and Biochemistry

kmiranda@email.arizona.edu

Katrina Miranda joined the Department of Chemistry and Biochemistry in 2002, where she is currently an associate professor and the Assistant Chair of Education and Assessment. Her research program focuses on defining the chemistry of physiological and pathophysiological processes involving nitrogen oxides and on producing nitrogen oxide donors as analytical and pharmacological tools. She teaches chemistry courses at both the undergraduate and graduate levels, and her awards include the 2010 University of Arizona GPSC Achievement Award for Outstanding Mentor of Graduate/Professional Students.

What is the challenge that my project addresses?

The primary question was: what should a person with a BS in chemistry and biochemistry be able to do upon graduation? Of course this challenge leads to others, not the least of which is convincing faculty that assessment is a valuable process and must be a component of the educational efforts of the faculty as a whole and of individual instructors.

How did I pursue the challenge?

This involved many meetings: at faculty meetings and retreats, with the Undergraduate Program Committee, with a newly instituted Lab Curriculum Committee, and with both the FLC and the OIA. The Department of Chemistry and Biochemistry is the product of a departmental merger several years ago. We have since been making efforts to coordinate both the major and minor programs offered within the department. These discussions led to a number of programmatic changes, drawing from the strengths of both the chemistry and biochemistry programs, and also to the creation of new courses that better serve our majors.

The primary question of my project led to a top-down approach to assessment. Starting from the question of what a person with a BS in chemistry and biochemistry should be able to do upon graduation, we have been discussing the content of lecture and lab courses. The lab courses at the 400 level in particular offer the opportunity to assess student competency.

What have I learned?

Programmatic assessment can be a daunting process but is worth the time invested. Small, positive changes at the beginning can lead to faculty buy-in and trust in the process. As an ongoing process, assessment challenges the status quo, but hopefully leads to an understanding of the need to continue to evolve our curricula and programs, just as our science does.

I was surprised to learn that programmatic assessment of chemistry and biochemistry programs is not being widely pursued. I was hoping to utilize existing assessment tools from other departments, but was rather disappointed to find that these did not exist, at least in readily adaptable formats.

My advice for others pursuing similar assessment challenges is:

Talk to the OIA as soon as possible. Be satisfied with small changes. Collect those interested in programmatic assessment into a working group.


Teresa Polowy

Department Head, Russian and Slavic Studies

tpolowy@email.arizona.edu

Teresa Polowy is Head of the Department of Russian and Slavic Studies, which offers a Russian major and minor and a Master's in Russian. Her areas of scholarship include contemporary Russian women's writing, twentieth- and twenty-first-century Russian literature, cultural studies, literary translation, and the problem of alcoholism and gender in Russian literature. She engages in ongoing professional development that is not only personally but also administratively beneficial, keeping her abreast of issues in education and allowing her to lead by informed example. Her ability to guide departmental assessment efforts has continued to improve since she was selected to participate in the two-year Faculty Learning Community on Program Assessment funded by the Office of Instruction and Assessment. 

 

What is the challenge that my project addresses?

Like many of us, my challenge was educating and convincing the faculty members of my department about the reasons for and the value of assessment. Once faculty understood that assessment addresses student learning rather than faculty teaching, and that we, as a faculty, are already doing many important types of assessment within our curriculum, the response was positive.

One challenge was to put SLOs in place for all four years of Russian language learning for Russian majors.

The largest challenge was agreeing on an assessment measure for one or two of the SLOs given for our undergraduate major in Russian and our MA in Russian. We want to find a measure to assess our SLOs for language proficiency at the end of the four-year major and at the end of the two-year MA.

How did I pursue the challenge?

An MA student is completing an internship under my direction in which SLOs are being proposed for each of the four years of Russian language learning in the major. Faculty have been impressed by and positive about this work.

As department head, I called an assessment workshop for a Saturday. Faculty were given plenty of lead time throughout the fall 2012 semester, and in February 2013 we held the day-long workshop.

I asked each faculty member to think of the one thing that s/he would like to change in terms of student learning in the undergraduate and graduate programs.

Then I asked each to think of how s/he would measure the change that s/he wanted to institute.

Commonality was reached, since many of the desired changes were interrelated. Discussion resulted in agreement on specific assessment measures for the language proficiency SLO at the end of each program.

What have I learned?

I have learned to value and revisit the training in Student Centered Learning (SCL) that I received in 2005-06 as a member of the Tri-University faculty group on Student Centered Learning sponsored by ABOR. Assessment, like SCL, has everything to do with effective student learning and with ensuring that faculty keep that goal front and center. Discussions as a teaching team about types of assessment measures and articulation issues are invigorating, stimulating, and good for faculty morale.

My advice for others pursuing similar assessment challenges is:

Persist and try to educate your colleagues, in a gentle way, about the value of assessment.  Know your colleagues and what they value – that is, whether they want to capitalize on success, work for improvement, or some combination of both when discussing assessment measures and SLOs. Do not try to be overambitious in terms of the scope of the assessment measures your unit is implementing to demonstrate an SLO.

 


Claudia Stanescu

Senior Lecturer, Physiology

stanescu@email.arizona.edu

 

Dr. Claudia Stanescu is a senior lecturer at the University of Arizona in the Department of Physiology, College of Medicine. Dr. Stanescu earned her doctoral degree in Physiological Sciences at the University of Arizona in May 2005.

In addition to teaching, Dr. Stanescu serves as the Associate Director for the Physiology Undergraduate Major and the Human Anatomy and Physiology course coordinator. She joined the Faculty Learning Community on Program Assessment in 2011 and has become the Assessment Coordinator for the Department of Physiology.

The Department of Physiology Assessment Plan was developed by Dr. Anne E. Atwater, a now-emeritus faculty member who was the Director of the School of Health Professions. When I joined the FLC on Program Assessment, I discovered that the department had continued to collect some data each semester, but the data were not being analyzed and presented to the faculty for feedback on a regular basis. My project as an FLC member was to revise the assessment plan and to get faculty feedback on the student learning outcomes so that the plan reflects the current program and faculty. In addition, my plan is to continue to collect data, analyze them, and present them to the faculty for feedback. The revised student learning outcomes and assessment plan are outlined below.

Student Learning Outcomes

Physiology involves the study of how living systems function, from the molecular and cellular level to the systems level, and emphasizes an integrative approach to studying the biological functions of the human body.  Students who complete the undergraduate major in Physiology should have a firm grasp of basic physiological principles and their application in real-life situations.

Content mastery should include knowledge of:

· Cellular function: the functions of cells; how cells grow, interconnect, and interact

· Organ systems function: basic concepts and mechanisms underlying the physiology of each organ system; interaction of all organ systems, foundation in all organ systems and depth in selected organ systems

· Integration of physiology from the cellular and molecular level to the organ system and organismic level of organization

· Comparative physiology: comparison of physiological characteristics of various organisms

· Current topics in physiology

· Career options in physiology

· Experimental design in physiology

In addition to acquiring knowledge about physiology, students who complete requirements for graduation should demonstrate mastery of the following skills:

· Understand scientific literature: the ability to critically read and evaluate scientific literature; ability to distill media information

· Oral communication skills: use effective oral communication in presenting and interpreting scientific information

· Written communication skills: use effective written communication in presenting and interpreting scientific information

· Physiology laboratory techniques: hands-on experience in selected laboratory techniques

· Quantitative evaluation of data: data collection, analysis and interpretation

Assessment Plan

· Faculty will mark the outcomes met in their courses on an easy-to-use grid and give feedback on the student learning outcomes.

· The newly vetted Program Student Learning Outcomes will be added to the Exit Survey given to Physiology graduating seniors.  Data will be collected from students anonymously as they come in to get their senior picture for the graduation ceremony.

· At the end of each spring semester, the department will organize a Focus Group that will consist of graduating seniors to collect qualitative data related to the program.

· Embedded exam questions will be used for assessment of student knowledge in the different aspects of Physiology. Integration from cell to system was a major topic covered in our core courses (PSIO 201, PSIO 202, PSIO 303A, PSIO 303B) according to the faculty survey. Integration will be assessed using embedded exam questions in the core courses.

· The information from the Exit Survey, the Focus Group and the data collected from embedded questions will be discussed in the departmental Curriculum Committee and presented annually at a faculty meeting.

· Feedback from the faculty will be discussed in the Curriculum Committee and changes to the program will be made as needed.


Hal Tharp

Associate Department Head, Electrical & Computer Engineering

tharp@email.arizona.edu

 

Hal Tharp received his B.S., M.S., and Ph.D. degrees in Electrical Engineering.  Since 1987, he has been with the Electrical and Computer Engineering (ECE) Department at the University of Arizona, where he is currently the Associate Department Head.  His main research interests are in the general area of control theory, with his primary focus being on applying effective control strategies to real-world systems.  He has worked on controlling large flexible space structures, optical disk drives, space-based calorimeters, instructional control systems, temperatures inside living tissue, gyroscopically stabilized platforms, metrology, and micro-electromechanical systems (MEMS).  His teaching activity has been extensive and includes courses in control, circuits, signals and systems, introductory engineering, power electronics, embedded systems, and senior design.  His teaching continues to incorporate technology when appropriate.  He has taught video classes for distance-based students, developed a web-based electric circuits course for non-ECE students, prepared short-course material for Professional Engineering training, incorporated video material via the web to enhance his automatic controls lab component, integrated screen-capture material to supplement his conventional courses, and participated in an active learning project involving the three Arizona universities.  Dr. Tharp has also been very active in the assessment activities in the ECE Department.

What is the challenge that my project addresses?

The challenge in this project was to integrate an assessment process into the newly established B.S. Electrical and Computer Engineering degree program.  Additionally, we wanted to be able to effectively interpret the collected assessment data and utilize the assessment process to continuously improve the ECE program.

How did I pursue the challenge?

In addition to the help received through the FLC, I worked with the newly appointed Department Head to help simplify the existing assessment process in the ECE department.  The simplified assessment process amounted to requesting student data from each ECE faculty member teaching a core course in the ECE program.  The Student Outcomes or learning outcomes for the undergraduate program in Electrical and Computer Engineering are aligned with the Student Outcomes established by the ABET accrediting agency.  Each core course faculty member was asked to select one course activity for each student outcome that was assigned to their course.  The faculty member then provided the student scores for that course activity, the maximum possible score for that activity, and the score that meets the criterion.  Each of the eleven Student Outcomes was assigned to more than one core course.  Thus, the assessment process incorporated a level of redundancy in the coverage of the Student Outcomes.  The data were then processed relative to the scores meeting the criterion, with the percent of students meeting the criterion being computed.  A table documenting these percentages was constructed to provide a visual record of how effectively the ECE program was meeting its assessment goals.  Program changes or improvements can now be made based on the tabulated data.
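As a minimal sketch of the tabulation step just described (assuming the per-activity scores have already been pulled out of the faculty spreadsheets), the percent of students meeting the criterion for each course and Student Outcome could be computed along these lines; the course names, outcome labels, and scores are invented for illustration, not the department's actual data or code.

```python
# Hypothetical sketch: for each (course, Student Outcome) pair, compute the
# percentage of students whose score on the selected activity meets the
# criterion score. All names and numbers below are illustrative only.

from collections import defaultdict

records = [
    # (course, outcome, student_score, max_score, criterion_score)
    ("ECE 220", "Outcome a", 42, 50, 35),
    ("ECE 220", "Outcome a", 30, 50, 35),
    ("ECE 320", "Outcome c", 18, 20, 14),
    ("ECE 320", "Outcome c", 12, 20, 14),
]

met = defaultdict(int)    # students meeting the criterion, per (course, outcome)
total = defaultdict(int)  # students assessed, per (course, outcome)

for course, outcome, score, max_score, criterion in records:
    total[(course, outcome)] += 1
    if score >= criterion:
        met[(course, outcome)] += 1

# One row per (course, outcome): the percentages that would populate the table.
for key in sorted(total):
    pct = 100.0 * met[key] / total[key]
    print(f"{key[0]}  {key[1]}: {pct:5.1f}% met criterion ({met[key]}/{total[key]})")
```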

What have I learned?

The assessment process should be as simple and straightforward as possible.  It should not represent a significant increase in the workload for any of the faculty.  Ideally, the assessment data can be collected from pre-existing graded material.  For our particular assessment of Student Outcomes, I created a pre-formatted spreadsheet for each faculty member teaching a core course.  The faculty member only needed to enter the student data for the few items being assessed in their course.

My advice for others pursuing similar assessment challenges is:

Don’t make the assessment process too complex or tedious.  Only require a few assessment measurements for each assessed course.  Standardize the process, so that each faculty member is being asked to do the same level of work.