Question 2

2.2       Please respond to 2.2.a if this is the standard on which the unit is moving to the target level.

2.2.a   Standard on which the unit is moving to the target level.

  •   Describe areas of the standard at which the unit is currently performing at the target level.

The School of Education and the professional community regularly evaluate the capacity and effectiveness of the unit’s assessment system, which reflects the conceptual framework and incorporates candidate proficiencies outlined in professional and state standards. Annual retreats occur each May, during which significant time is allotted for analysis of data and of the requirements of the candidate assessment points (CAPs) at both the initial and advanced levels. The professional community for the unit includes unit faculty, Arts and Science faculty, P-12 practitioners and candidates. P-12 practitioners include teachers and administrators in the region who serve as cooperating teachers for student teachers and practicum students, as well as teachers and administrators who evaluate portfolios and interview candidates during the CAP 4 exit program. Other members of the professional community are involved through various entities that meet regularly: the education faculty meet monthly, the Teacher Education Committee (TEC) meets monthly, and the Teacher Education Advisory Council (TEAC) meets annually. Meetings of the TEC and TEAC include consideration of the unit’s assessment system, as illustrated through agendas and minutes. In addition, the special education program has several mentors who meet on campus regularly in association with the Kentucky Alternative Certification in Special Education (KACSE) committee. Together, these entities and the unit faculty regularly evaluate the capacity and effectiveness of the assessment system.

The continuous assessment system illustrates careful alignment with the unit’s conceptual framework and its theme, ‘Empowerment for Learning’. Components of the conceptual framework are aligned with the Kentucky Teacher Standards (KTS), which are interwoven throughout the programs, while the Empowerment Index demonstrates overall effect. Candidates in the introductory classes engage in several activities in which they analyze the conceptual framework and its relationship to their program experiences. Great care is also taken to ensure that state and national proficiencies are incorporated into every program, as evidenced by the program submissions and by the analysis of data by KTS at annual retreats. Several indicators from select KTS also demonstrate diversity proficiencies, which are summarized and analyzed as well. Many of the Praxis II preparation materials and professional development sessions used by faculty reflect the KTS and national standards.

The unit regularly examines the validity and utility of the data produced through assessments and makes modifications to keep abreast of changes in assessment technology and in professional standards. Annual retreats each May provide opportunities for the unit faculty to review and analyze data summaries from the previous year’s CAPs, to determine the data’s validity and utility, and to make modifications in the system. Content validity is enhanced as programs are developed with matrices that illustrate alignment with national and state standards. Efforts are made to analyze and document the relationship between assessment results and candidate performance. If, over time, the unit faculty identify a critical need to improve candidate performance, the ideas are discussed during retreats and changes to the assessment system are made accordingly. As the standards and/or certifying Praxis II examinations change, the unit strives to ensure the programs are aligned with those changes. Program curriculum guide sheets are also updated to reflect any changes.

Decisions about candidate performance are based on multiple assessments made at multiple points before program completion and in practice after completion of programs. The continuous assessment system, at both the initial and advanced levels, involves multiple Candidate Assessment Points (CAPs): four at the initial level and three at the advanced level. Each CAP requires candidates to demonstrate proficiency on multiple assessments, both internal and external. Internal assessments include, but are not limited to, grade point average, disposition assessments and portfolio evaluations at the initial level, and action research projects and culminating projects at the advanced level. A portfolio plan developed for candidates seeking initial certification identifies critical course-embedded assessments that eventually factor into portfolios. Clinical experiences provide opportunities for candidates to demonstrate standards and other expectations during placements that span 16 weeks in the classroom. External assessments include CAP 4 portfolio and video evaluations, Praxis II examinations, cooperating teacher evaluations, performance during KTIP, and the New Teacher Survey conducted by the Kentucky EPSB. Courses also include an array of assessments such as lesson plans, instructional units, examinations, research papers, journals, field experiences, assessment design and instructional web pages. Candidates in advanced programs develop action research projects (Teacher-Leader Master of Arts in Education) or culminating performances (Rank 1).

Data show a strong relationship between performance assessments and candidate success throughout their programs and later in classrooms or schools. All certification programs are aligned with state and national standards that are assessed throughout the respective curricula. Candidates must demonstrate through various performance tasks that they know their content and that they know how to teach it. They have numerous opportunities during class and field experiences to teach and to receive feedback from practitioners and faculty members. They must demonstrate through the completion of critical reflection tasks (Tasks C, J1 and J2) that student learning has occurred (1.3.g). Candidates use multiple assessments to determine and improve student learning, which in turn informs their practice. Candidates must meet specific requirements at each CAP in order to progress through the educator preparation program, and policies are in place to monitor candidate performance levels, illustrating the relationship between candidate performance and success in the program. Data from KTIP pass rates and the New Teacher Survey further demonstrate candidates’ performance and success in the classroom.

The unit conducts thorough studies to establish fairness, accuracy and consistency of its assessment procedures and unit operations, and it makes changes in its practices consistent with the results of these studies. First, the unit ensures the fairness of its assessment procedures and unit operations through the program development and review process. Second, faculty members help candidates understand what is expected throughout the program and at the designated CAPs. During CAP data analysis at annual retreats, considerable discussion ensues regarding instructional validity and the candidate experiences necessary to perform at high levels on proficiency standards.

Accuracy, or content validity, is regularly studied and reviewed as programs are updated to align with national and state standards and learning proficiencies. During retreats, results for major assessments are compared with other internal and external assessments, which also helps ensure concurrent validity. Consistency, or reliability, is considered and formally evaluated through inter-rater reliability studies of portfolio scoring. In previous studies, all of the education faculty evaluated a common portfolio, and the data were analyzed in SPSS to compare scores across raters. Initiatives to conduct inter-rater reliability studies using LiveText are underway. The unit also strives to ensure consistency by providing mentors for new faculty members, especially during the portfolio evaluation process.
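
For illustration only, the sketch below shows one common way inter-rater agreement on portfolio scores could be quantified (Cohen’s kappa). The unit’s actual analyses were conducted in SPSS; the raters and scores in the example are invented.

    # Hypothetical illustration: Cohen's kappa for two raters scoring the
    # same set of portfolios on a 1-4 holistic scale (invented scores).
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Agreement between two raters, corrected for chance agreement."""
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        counts_a, counts_b = Counter(rater_a), Counter(rater_b)
        expected = sum((counts_a[c] / n) * (counts_b[c] / n)
                       for c in set(counts_a) | set(counts_b))
        return (observed - expected) / (1 - expected)

    # Invented scores for ten common portfolios evaluated by two faculty raters
    rater_1 = [3, 4, 2, 3, 3, 4, 1, 2, 3, 4]
    rater_2 = [3, 4, 2, 2, 3, 4, 1, 3, 3, 4]
    print(f"Cohen's kappa: {cohens_kappa(rater_1, rater_2):.2f}")  # 0.71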

The unit’s assessment system provides regular and comprehensive data on program quality, unit operations and candidate performance at each stage of its programs, extending into the first years of completers’ practice. The continuous assessment plan consists of four candidate assessment points (CAPs) at the undergraduate level and three CAPs at the graduate level. Data are collected through implementation of the CAPs each semester and then aggregated to determine unit and program quality. During annual retreats, all data are summarized and converted to graphs, which are analyzed to determine program quality, unit operations and candidate performance; data are also disaggregated to determine unit and program quality on campus, at off-campus sites and in online formats (retreat data summaries). The most recent graduate follow-up policy involves surveys distributed through LiveText after the first and third years following graduation. Placement data and employer survey data are also collected through follow-up surveys.

Assessment data from candidates, graduates, faculty and other members of the professional community are based on multiple assessments from both internal and external sources that are systematically collected as candidates progress through programs. Internal sources of data collected at candidate assessment points (specifically CAPs 1-3 and 5-7) include various assessments, ranging from grade point average and disposition assessments to portfolio evaluations at the initial level, and action research projects and culminating projects at the advanced level. A Portfolio Development Plan developed for candidates seeking initial certification identifies critical course-embedded assessments that eventually factor into portfolios; these have been added to the corresponding courses in LiveText. Clinical experiences provide opportunities for candidates to demonstrate standards and other expectations during placements that span 16 weeks in the classroom. External assessments include CAP 4 portfolio and video evaluations, Praxis II examinations, cooperating teacher evaluations, performance on the KTIP, and the New Teacher Survey conducted by the Kentucky EPSB. Courses also include an array of assessments such as lesson plans, instructional units, examinations, research papers, journals, field experiences, assessment design and instructional web pages. Candidates in advanced programs develop action research projects (Teacher-Leader Master of Arts in Education) or culminating performances (Rank 1). Both major assessments involve varied internal and external evaluators.

Data are disaggregated by program when candidates are in alternative route, off-campus and distance learning programs. Data are aggregated to determine unit and program quality; they are disaggregated to determine unit and program quality on campus, at off-campus sites and in online formats. Currently the unit offers Interdisciplinary Early Childhood Education (IECE) programs on campus at Somerset, Elizabethtown and Louisville. The special education program is offered in a blended online format (online courses that may require face-to-face meetings). Each May, when the annual data are presented for analysis, the data are both aggregated and disaggregated to determine program quality for all sites and in all formats.
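
As a purely illustrative sketch of this aggregation and disaggregation, the example below assumes a handful of invented CAP records carrying program, site and delivery-format fields; the unit’s actual records reside in its ACCESS database and LiveText.

    # Hypothetical illustration: aggregating and disaggregating invented CAP
    # records by program, site and delivery format (column names are assumed).
    import pandas as pd

    cap_records = pd.DataFrame({
        "program": ["IECE", "IECE", "IECE", "Special Education", "Special Education"],
        "site": ["Main campus", "Somerset", "Louisville", "Online", "Online"],
        "format": ["Face-to-face", "Face-to-face", "Face-to-face",
                   "Blended online", "Blended online"],
        "cap": [3, 3, 4, 3, 4],
        "portfolio_score": [3.4, 3.1, 3.6, 3.2, 3.5],
    })

    # Aggregated view: overall mean portfolio score at each CAP (unit quality)
    print(cap_records.groupby("cap")["portfolio_score"].mean())

    # Disaggregated view: mean portfolio score by program, site and format
    print(cap_records.groupby(["program", "site", "format"])["portfolio_score"].mean())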

Data are regularly and systematically compiled, aggregated, summarized, analyzed and reported publicly for the purpose of improving candidate performance, program quality and unit operations. Each semester, data generated through implementation of the continuous assessment system and the respective CAPs are entered into an ACCESS database and then summarized for analysis at faculty retreats each May. Annual analysis of data results in the identification of several growth areas included in a Program Improvement Plan (PIP) for each upcoming year. Aggregated and disaggregated data from the continuous assessment system are shared with respective faculty at summer faculty sessions for the IECE program and the special education program. Data are also regularly shared with the TEC, as well as with the P-12 members of the TEAC, which typically meets annually. During annual Dean’s meetings, data regarding candidate performance, including Praxis II reports, are presented to and discussed with the Music, Arts and Science deans/chairs.

The unit has a system for effectively maintaining records of formal candidate complaints and their resolution. The unit has two operational complaint policies, administered by either the Dean or the Associate Dean of the School of Education, who maintain confidential files on such issues. One policy is a general appeals policy and the other addresses dispositions. The university also has a general appeals policy for handling issues at that level. These policies are available in the evidence for this standard (2.3.c). In addition, candidates who are student teaching are afforded due process regarding their performance.

The unit is developing and testing different information technologies to improve its assessment system. The unit data entry specialist enters data from the various assessment instruments associated with CAPs 1-7 into a Microsoft ACCESS database, which provides for collection, summarization and analysis of data at annual retreats. Additionally, the unit implemented electronic portfolios in LiveText during the fall of 2011. The implementation plan involved CAP 3 portfolios in the fall of 2011 and CAP 4 portfolios in the spring of 2012. Further implementation into the Teacher-Leader Master of Arts in Education and special education programs is planned for the fall of 2012.

The unit has fully developed evaluations and continuously searches for stronger relationships in evaluations, revising both the data systems and analytic techniques as necessary. The CAP tables in the continuous assessment plan delineate the various evaluations required at monitoring checkpoints. The portfolio plans for the regular program and for the initial special education program at the graduate level identify critical course-based assessments that eventually factor into the respective portfolios. The unit has adopted a series of assessment tasks that mirror those of KTIP; the KTIP assessments have been adapted to better meet the requirements of courses in the preparation program. Each May, the data system itself is analyzed on the basis of the data collected and summarized, and revisions to the CAP forms are made as needed.

The unit not only makes changes based on the data, but also systematically studies the effects of any changes to assure that programs are strengthened without adverse consequences. During annual retreats, a major outcome of data analyses is the identification of several growth areas for the upcoming year based on the KTS, disposition data or other data that reveal areas of greatest need. The resulting Program Improvement Plan (PIP) includes the action steps the unit plans to take to strengthen the identified growth areas, and the PIP is regularly an agenda item for faculty meetings. For example, when the data revealed lower candidate performance on KTS 7 (reflection), that standard became a major focus the next year; efforts centered on appropriate changes in KTIP tasks, identification of reflection questions for field hours, and PPD seminars with required attendance. Thus far, the changes have had positive effects on programs and the resulting data. Through conversation at retreats and throughout the year, unit faculty are able to determine whether any changes have adverse effects and to take corrective action.

Candidates and faculty review data on their performance regularly and develop plans for improvement based on the data. Candidates are required to complete several self-assessments throughout the continuous assessment system. When they teach lessons and units, they are required to reflect on their instruction and student learning through Tasks C, J1 and J2. As candidates matriculate through the CAPs, they receive feedback from interviews, letters from the dean or associate dean informing them of their status, and continuous feedback from their advisors and course faculty. Candidates also develop Pre-Professional Growth Plans (PPGP), which involve identification of their strengths and growth areas relative to the KTS and dispositions; the task associated with this plan requires candidates to develop action steps for improving the identified growth areas. Faculty receive feedback from course evaluations through the Academic Dean’s office. In addition, faculty in the unit are required to develop professional growth plans, implemented in the fall of each year, based on self-assessments and perceived needs.

Since the last accreditation visit, the assessment committee, consisting of several education faculty members and the data entry specialist, has met regularly each semester to analyze the unit’s performance on target level indicators. Additionally, the annual retreats have provided major opportunities for the unit to determine the impact of its assessment plan on candidate performance and program quality as it moves toward target level performance. Numerous opportunities exist for the professional community to be involved in the CAP system, such as during the CAP 4 exit program. The standing committees (TEAC, TEC and KACSE) provide further opportunities to evaluate activities and their impact on candidate performance and program quality at the target level.

The respective NCATE committees continue to meet at least two or three times each semester to analyze how well the unit is meeting the target level of Standard 2. Target indicators are reviewed, and these guide the development and implementation of the annual action plans presented each year during faculty meetings; these plans often result in proposed changes to the assessment system or other facets of unit operations. Additionally, the unit continues to seek feedback from members of the professional community; through involvement in the various CAP evaluation events, the TEAC and the TEC, these members review the assessment system. The annual retreats will continue to provide analysis of data to sustain target level performance.