Data Collection and Analysis Tools

Starting in 2021, EP will use AEFIS to collect and house assessment data. The goal is to align key assessments with program learning outcomes and to collect the associated data in AEFIS. The tool provides the ability to aggregate data across EP, its programs, and its students.

The AEFIS assessment system, in addition to the assessment team, is supported by EP’s instructional technologists and by the AEFIS organization’s support staff. The EP assessment system is also integrated with all existing JHU systems, and central JHU IT supports maintenance and upgrades of the integration. Furthermore, the instructional design team supports course design and the integration of assignments, assessments, surveys, and reports between Blackboard, AEFIS, and other systems.

EP Annual Report

Program leadership is expected to collect and analyze learning assessment data as outlined in the program’s learning assessment plan. Annual reports are used to make course- and program-level changes and improvements.

The EP assessment team will analyze all results to identify division-level patterns and trends in strengths and needs, which in turn support recommendations for changes at the program and division levels.

Assessment Types

Assessments include measures that are direct and indirect, course-embedded and stand-alone, and formative and summative. Direct assessments measure learners’ performance through specific assignments, standardized tests, and observations. Indirect assessments measure learners’ values, perceptions, attitudes, behaviors, and beliefs through course evaluations, surveys, meeting notes, reflections, and feedback.

Course-embedded measures are assessments within required courses that expose learners to systematic learning experiences designed to prepare graduates by providing them with the specific knowledge and abilities to address identified learning outcomes. These include test questions or assignments that are often discipline-specific.

Formative assessments provide feedback and guidance to candidates as they progress in the course. Summative assessments capture performance at key points and are measured against expected levels of progress at each of these points.

Interpretation of multiple measures from all candidate-level assessments guides improvements in courses, programs, and unit operations.

Additional details on each part of the assessment plan are available in the Assessment Plan Components section.

Direct and Indirect Measures

Direct Measures

Direct assessment measures are used to evaluate candidates’ knowledge, skills, and behaviors that reflect achievement of the program’s stated goals and objectives.

Student Learning Outcomes
  • Tests, exams, standardized testing, national and state tests
  • Review by external and internal examiners
  • Oral and comprehensive exams
  • Papers
  • Portfolios
  • Behavioral observations
Operational Outcomes
  • Retention rate
  • Graduation rate
  • Degree attainment
  • Materials and equipment
  • Cost per student
  • Faculty qualifications
  • Faculty productivity
  • Audits by external evaluator

Indirect Measures

Indirect assessment measures are used to gather candidates’ perceptions in order to determine whether a goal or objective has been achieved.

Student Learning Outcomes
  • Surveys and focus groups to seek:
    • Candidate perception
    • Alumni perception
    • Employer perception of program impact
  • Admission and exit interviews
  • Grades in courses
  • Candidate records
Operational Outcomes
  • Surveys, interviews, and focus groups to seek:
    • Stakeholder (student, administration, staff, faculty, employer) perceptions of operations

Course-Based Assessment

Course-based assessments measure knowledge, skills, and behaviors in courses within a program. They should align with each program’s Program Learning Outcomes (PLOs).

Assessment Plan Components

WHAT: Articulate statements that define the knowledge and skills a student is expected to have learned by the completion of the program. These statements are known as learning outcomes.

Learning outcomes state what students are expected to know or be able to do when they have completed the program. Learning outcomes should be clear and measurable.

Start with an action verb that denotes the level of learning expected. Levels of learning and associated verbs may include the following:

  • Remembering and understanding: recall, identify, label, illustrate, summarize.
  • Applying and analyzing: use, differentiate, organize, integrate, apply, solve, analyze.
  • Evaluating and creating: monitor, test, judge, produce, revise, compose.

Follow the verb with a statement describing the knowledge and abilities to be demonstrated.

HINT: Terms such as know, understand, learn, and appreciate are generally not specific enough to be measurable.

Examples:

Learning outcomes for the graduate Systems Engineering program. Graduates will be able to:

  1. Apply technical knowledge in mathematics, science, and engineering to lead the realization and evaluation of complex systems and systems of systems.
  2. Demonstrate the ability to conceive of, gather user needs and requirements, design, develop, integrate, and test complex systems by employing systems engineering thinking and processes within required operational and acquisition system environments.
  3. Understand and utilize the life cycle stages of systems development from concept development through manufacturing and operational maintenance.
  4. Exercise their responsibilities in the management of cost-effective systems product development by leading and participating in interdisciplinary teams.
  5. Communicate complex concepts and methods in spoken and written formats.
  6. Demonstrate awareness and capability in employing tools and techniques in the systems engineering process.

WHERE: A specification or mapping of where the learning takes place in the curriculum (a curricular map). Curriculum mapping provides an effective strategy for articulating, aligning, and integrating learning outcomes across a sequence of courses or other learning experiences.

This type of mapping analysis illustrates how the curriculum is being used, and the activity can in itself be a helpful first step in program improvement. It can highlight courses that are important to multiple learning outcomes, and it can help identify courses that may not be contributing to the program.
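For illustration only, the short Python sketch below shows one way a curricular map might be represented and queried to surface those patterns; the course numbers, outcome labels, and mappings are hypothetical and are not drawn from any EP program.

# Minimal curricular-map sketch (hypothetical course numbers and PLO labels).
curriculum_map = {
    "605.601": ["PLO1", "PLO2"],
    "605.611": ["PLO2", "PLO3"],
    "605.621": ["PLO3"],
    "605.701": [],  # course not yet mapped to any outcome
}
program_outcomes = ["PLO1", "PLO2", "PLO3", "PLO4"]

# For each outcome, list the courses that address it; outcomes covered nowhere,
# or courses mapped to no outcome, are flags for the program to review.
for plo in program_outcomes:
    courses = [c for c, plos in curriculum_map.items() if plo in plos]
    print(plo, "->", ", ".join(courses) if courses else "not addressed in the curriculum")

unmapped = [c for c, plos in curriculum_map.items() if not plos]
print("Courses with no mapped outcomes:", unmapped)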

HOW: Develop assessment strategies and measures (direct/indirect) to determine how students are meeting expectations for learning.

  1. HOW: One direct measure of how well the students in the program, collectively, meet one or more of the program’s learning outcomes. This is the one activity specific to the academic program and undertaken by program faculty and staff.
Examples of direct assessment:
  • Capstone experiences. Faculty collect examples of student work from the capstone course. A group of faculty evaluates the work using explicit criteria and reviews the evaluation against program standards to identify areas of program strength (to be protected and amplified) or weakness (potential for improvement). Such an assessment uses a rubric, or scoring sheet, that is designed for the purpose and sets standards for the program.
  • Research dissertation. For research-based graduate programs, faculty create a rubric for evaluating performance on the preliminary exam and/or the dissertation defense. These rubrics include neither faculty nor student names. Periodically (annually or once every two or three years) the rubrics are reviewed to identify areas of program strength (to be protected and amplified) or weakness (potential for improvement).
  • Standardized tests. A standardized test is given to all program students near graduation. The program sets a standard for the percentage of students who will achieve specified levels of achievement, for example, a 95% pass rate and a 50% high-achievement rate. If the standard is not met or exceeded, the program can review the testing details to identify areas for improvement (a minimal sketch of this kind of check appears after the list of indirect examples below).
  • Embedded tests/assignments. Embedded tests can be applied in many contexts and are often ideal for assessing skill acquisition. The same test is given to all students routinely in the course of other testing; that is, the assessment is embedded within a course test or final exam. Students are graded individually, and how well students do collectively is also evaluated. For example: in a microbiology program an embedded test might be a practicum test of sterile technique; in a computer science program an embedded test might be the ability to solve a specific foundational coding problem; and in an art history program a student may be asked to write an analysis/critique of a piece of art. The embedded test is given to all students, and the results are reviewed at the end of the year (or on a two- or three-year cycle) without student or instructor names.
  2. HOW: One indirect measure of how well students in the program, collectively, are meeting expectations. Indirect measures use surveys, course evaluations, or other approaches that reveal students’ perceptions of their learning. For indirect measures, there are EP resources that program faculty and staff can use without having to implement their own surveys.
Examples of effective indirect assessment:
  • Course evaluations. Course evaluations can align course objectives to program learning outcomes if questions about student learning are included and if the courses have been mapped to the learning outcomes.
  • Student surveys. A paper or online survey of graduating students aimed at gathering information about their perceptions of learning. The following three questions are sufficient for a very basic survey.
  • Alumni surveys. These could use questions similar to those in the graduating student survey.
  • Employer surveys. These are useful if the employer base is sufficiently well identified, but the employer base is often too dispersed for surveys to be practical. A more productive approach is an annual meeting and structured discussion with an advisory board composed of employers or potential employers of graduates.
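As a companion to the standardized-test example above, the short Python sketch below shows one way a program might compare test results against its stated standards. The scores, the 70-point passing threshold, and the 90-point high-achievement threshold are hypothetical assumptions for illustration; only the 95% and 50% standards come from the example.

# Hypothetical scores for students who took a standardized test near graduation.
scores = [72, 88, 94, 65, 91, 77, 83, 96, 70, 89]

PASS_SCORE = 70        # assumed passing threshold (hypothetical)
HIGH_SCORE = 90        # assumed high-achievement threshold (hypothetical)
PASS_STANDARD = 0.95   # program standard from the example: 95% pass rate
HIGH_STANDARD = 0.50   # program standard from the example: 50% high-achievement rate

pass_rate = sum(s >= PASS_SCORE for s in scores) / len(scores)
high_rate = sum(s >= HIGH_SCORE for s in scores) / len(scores)

print(f"Pass rate: {pass_rate:.0%} (standard: {PASS_STANDARD:.0%})")
print(f"High-achievement rate: {high_rate:.0%} (standard: {HIGH_STANDARD:.0%})")

if pass_rate < PASS_STANDARD or high_rate < HIGH_STANDARD:
    print("Standard not met: review testing details to identify areas for improvement.")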

SO WHAT: Review the assessment activity findings (evidence). Are students meeting expectations? Validate or consider ways to improve.

  1. Conduct an annual program faculty meeting to discuss the evaluative evidence, analyze what it means for the program, and define any next steps.
  2. Prepare an annual report that will be used as a reference for successive years’ assessment activities.