What is Distance Learning

Author: name hidden by the user; posted 25 November 2013 at 19:56; coursework

Description

Introduction to Distance Learning
The History of Distance Learning
California Considerations (with particular interest to California adult schools)
Distance Learning Design
Planning and Administration
Distance Learning Evaluation
Distance Learning Online


Readers with a deep interest in instructional design trends and thinking can consult Learning Technology, an IEEE publication.

Distance Learning Planning and Administration

This section applies to most adult basic education programs and has specific application for California adult educators, especially adult schools. It discusses the design, administration, and assessment of distance learning.

Planning

Your adult education program should have a technology plan in place; it provides the guidelines for setting priorities. California has developed a good model and an eight-step process for creating a technology plan.

The CDLP recommends that adult education programs routinely survey their learners regarding their access to and interest in computers and the Internet. Learner access to videotape and DVD players should also be documented. This information will help you plan your intervention strategy.

Including distance learning in your instructional strategies assumes there is a need. A learner-centered needs assessment addressing the demand for distance learning, identifying potential learners, their learning styles, and delivery-system alternatives should be the starting point. The goal is to define the most effective and cost-effective methods of serving the targeted learner within your technical capabilities.

An overall distance learning goal is to serve learners who cannot or will not attend traditional classroom courses. Motivation to participate (readiness to learn) has been used as a surrogate for self-directedness. However, screening should occur to determine whether the prospective learner can work comfortably in a self-directed environment. This is easier said than done, given the lack of predictive tools. It is nonetheless important that instructors or facilitators involve learners in diagnosing their own learning needs and identifying their learning objectives at enrollment.

Stephen Brookfield's work in self-directed learning, critical reflection, and experiential learning sets the conceptual framework for defining the context for independent learning. [See, for example, Stephen D. Brookfield, Developing Critical Thinkers, Jossey-Bass, San Francisco, 1987.] He notes that self-directed learning requires adults to take control of their learning, setting their own learning goals and determining which learning methods to use. The cross-cultural dimension must be taken into account in promoting self-directedness, as should the adult's previous experiences. Critical reflection incorporates learning in which adults can reflect on their self-images and change their self-concepts. Challenging previously held beliefs, values, and behaviors is important to this reflection.

It is important that the distance learning provider supply concise statements of expected learning outcomes. These outcomes should guide the instructional strategies, technology, and intervention methods. A method for using assessment and feedback to inform learners and instructors should be built into the design.

The California Distance Learning Project recommends that adult education programs initially experiment with a small video checkout program to see whether distance learning is a useful intervention. This test can be designed as a hybrid of an existing classroom-based course: a small group of students is enrolled in the video checkout to extend and speed their learning activities. The adult education administrator can then decide whether to expand the distance learning program, based on more concrete experience and information.

Using the video checkout test permits the coordinator to have face-to-face contact with learners when they check in videos and have their work reviewed. This experience will lead to subtle adjustments in the program design and a better feel for learning requirements in a distance learning context.

Distance learning utilizing instructional technologies should be incorporated into the overall technology planning. Special attention to staff development and support is important.

Administering Distance Learning

A distance learning program normally has an assigned coordinator. The coordinator's responsibilities include:

  1. Needs assessment and learner identification
  2. Program approval
  3. Marketing and promotion
  4. Outreach and recruitment
  5. Coordination with classroom programs
  6. Assessment and enrollment
  7. Testing and progress monitoring
  8. Learning materials inventory
  9. Instructor supervision
  10. Managing and using student and program information
  11. Program evaluation and improvement

The areas where programs appear to have the most difficulty are assessing the learner's ability to learn in a self-directed context and providing individualized assistance. Screening and counseling should occur during the distance learning enrollment process.

Providing individualized assistance will vary dramatically according to the type of distance learning intervention. Instructor–learner contact is necessary; how, and how often, it is provided varies. If regular face-to-face contact is impractical, telephone or written contact should be used and documented. Email and chat activities are important in Internet-delivered instruction.

Record Keeping

California adult schools are required to maintain Tracking of Programs and Students (TOPSpro) data on all enrolled learners. Learners enrolled only in distance learning can be identified by checking the distance learning box on the Special Programs section of the TOPSpro Entry Record. This provides demographic and programmatic information on each learner.

Additional learner progress information normally is maintained in an individual portfolio or file. The content is based on the type of distance learning program. This information is invaluable in working with the individual learner and monitoring her or his progress.

Determining ADA

California adult school nontraditional learning is subject to seat-time accounting practices (average daily attendance, or ADA). In making an annual Innovation Program application, the adult school describes how instruction is cross-referenced with ADA. Several models have been developed and followed by many adult schools. The Los Angeles Unified School District's model is listed on the California Distance Learning Project Web site under Research. Other examples are available from the individual Innovation Program applications, which are online at the Adult Education Office (AEO) Web site: click on "See other participating adult schools" and pick a program.

Accountability

Accountability has two elements in this discussion —

  • collecting demographic and program participation data on the learner, and
  • collecting standardized pre- and post-test data on the learner.

Accountability in California adult school distance learning programs is based on the use of TOPSpro to maintain common data on learners and programs. Pre- and post-testing using the appropriate CASAS reading and listening tests for ESL, ABE, and GED/adult secondary education learners is required when federal funds are being used. Other criterion-referenced assessment instruments should be used as appropriate for the authorized program area of instruction.

Collecting progress testing information on each distance learning enrollee presents special problems, especially when the learner is enrolled in a telecourse. At the least, a valid random sample of learners should be used to limit the data-collection burden.
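The sampling idea above can be sketched as a small script. The roster, sample fraction, and `draw_sample` helper here are hypothetical illustrations, not part of any CDLP or TOPSpro tooling.

```python
import random

# Hypothetical roster of distance learning enrollees; a real program would
# pull learner IDs from its TOPSpro records.
enrollees = [f"learner-{i:03d}" for i in range(1, 201)]

def draw_sample(roster, fraction=0.25, seed=2024):
    """Draw a simple random sample of the roster for progress testing.

    A fixed seed makes the draw reproducible, so the selection can be
    documented and audited later.
    """
    rng = random.Random(seed)
    size = max(1, round(len(roster) * fraction))
    return rng.sample(roster, size)

sample = draw_sample(enrollees)
print(len(sample))  # 50 of 200 learners selected for pre/post testing
```

Sampling a fixed fraction keeps the testing burden predictable as enrollment grows; the fraction itself would be set by whatever the program considers a valid sample.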

This standardized data collection is relatively new, and many local programs do not use the data effectively to examine their program performance and make useful adjustments. In the future it is likely that more standardized learning outcomes or performance based information will be required. It is prudent to become comfortable with using these readily available data in making reports to one's various local stakeholders like the instructional staff, adult education principal or administrator, and the district superintendent.

Other, more idiosyncratic tools like authentic assessment, portfolios, and records of learner progress should supplement the required data collection. Asynchronous learning is self-paced and individualized, so it is important to maintain a good record of learner activities and progress that goes beyond the standardized record keeping and testing. Experience shows that teachers collect more detailed information on distance learners' progress than on classroom learners'. These teachers often have closer, more regular, and more personal student contact.

Monitoring and Evaluation

Monitoring and evaluation apply equally to the distance learning program and to the individual participant. Emphasizing the learner's self evaluation and reflection should be integral to progress monitoring activities. Outcome and achievement measures that reflect curriculum mastery cannot be overlooked in spite of our emphasis on standardized testing.

Portfolio Content

Experience has shown that distance learning programs often have more evaluation of learner progress than do the classroom programs. The following are suggestions for portfolio content. They were developed by the Hacienda La Puente distance learning program.

  • ID number on registration database
  • Progress Log to record student profile, attendance, lessons & goal achievement.
  • Registration copy
  • CASAS pre/post test results
  • Work Samples from first 9 hours of instruction to document entry level.
  • Anecdotes written by students with examples of value-added goal achievement.
  • Copies of important correspondence between staff and students.
  • Certificates of Achievement for excellent attendance or program completion. (To be given to students at next meeting with staff member)
  • Surveys to document other student gains not covered by CASAS. (To be given to students at next meeting with staff member)
  • Homework -- Corrected writing samples ready to be returned. All other work corrected with immediate student feedback at weekly tutoring appointments.
  • For Telecourses -- Additional work samples to document progress and copies of all returned-by-mail tests.

Because many educators and policy makers are still skeptical about distance learning, there should be a strong emphasis on documenting mastery as well as user satisfaction with the learning services received.

Independent Study

In California, independent study refers to a program that permits students to take high school subjects through individualized learning. The model calls for the student and teacher to create a quasi-informal contract in which both parties agree to certain outcomes and activities. Regular meetings to review progress and assignments are central to the agreement.

Independent study is very similar to distance learning but for the most part has not relied on instructional media to deliver the instruction. Adult secondary education can be delivered via distance learning. It is not subject to the Innovation Program limitations. However, the normal procedures and documentation that apply to the independent study programs apply equally to the distance learning Innovation Programs.

Conceptually, the model of a good independent study program is an equally good model for distance learning. It should not be trivialized by minimizing the role of the instructor. In such a program:

  • an individual agreement defines the roles and responsibilities of the learner and the instructor;
  • regular communications between the learner and instructor, and procedures to review progress, are defined;
  • learner progress is documented using standardized and alternative assessment tools; and
  • other expectations for each party are articulated.

The curriculum design must be based on the approved course outline.

Design Issues

Design issues change over time and vary according to the person's involvement in distance learning.

  • How do you screen for learner interest in distance learning?
  • Does your technology plan include orienting and training teachers in using distance learning resources?
  • What procedures do you use to assign learners to distance learning and at what levels?
  • How can you screen for self directedness?
  • How can pre- and post-testing best be integrated into your distance learning program?
  • How do you start a process to acquaint teachers with online instruction?
  • How can you use TOPSpro data in continuous program improvement?

Conclusion

Evaluation is undervalued and underutilized in adult education. Outcomes-based learning will drive adult education in the future, yet most programs cannot effectively document program and learner outcomes and strengths. The expansion of distance learning as an accepted modality will be tied to our ability to document outcomes and, when necessary, compare them with classroom-centered learning. Be sure to review the section on evaluating distance learning.

 

Evaluating Distance Learning

Generally, evaluation is used to determine the degree to which program objectives are met through the procedures used by the program. The evaluation determines whether the outcomes or results predicted by the program occurred and whether their occurrence was due to the project.

Unfortunately, program evaluation is often viewed as an isolated activity that functions apart from the actual project or program. It should be part of the overall administrative process, with the purpose of answering the pragmatic questions of decision makers who want to know whether to continue a program, extend it to other sites, or modify it. If the program is found to be only partly effective in achieving its goals, the evaluation research is expected to identify the aspects that have been unsuccessful and recommend the kinds of changes that are needed.

It is essential that evaluation and feedback be part of all distance learning programs. In most instances the evaluation will include learner and program performance information. The learner performance will be based on standardized and curriculum mastery measures.

Many adult education programs are remiss in utilizing even informal program evaluation techniques to monitor and modify their programs. While most administrators intuitively know how their programs are working, they lack the systematic data to verify their instincts.

This section may provide more detail on evaluation than is generally needed. The annual Innovation Program reports cited in the next section provide a good model of how to use data in describing and defining overall program performance and outcomes.

How Effective Is Distance Learning?

The Distance Education Clearinghouse provides a listing of distance learning evaluation studies and topics. Possibly the best-known effort is the work of Thomas L. Russell, who has examined research studies going back to 1928. This research shows that there is "no significant difference" between distance and classroom instruction.

Descriptive statewide California data on ESL, ABE, and adult secondary education/GED learners participating in adult school distance learning programs are quite positive. These annual reviews are reported at the California Distance Learning Project Web site (Innovation Program Reports). While the pre-post testing data are not representative, they show that the Innovation Programs for the most part perform better than historical norms. The 2003–2005 report concludes that "...When comparing classroom data with the Innovation Programs, it is clear that the distance learning programs are particularly successful in providing ESL learning opportunities. Local research data on student persistence and retention support these findings.

The Innovation Programs meet the three crucial benefit–cost criteria necessary to be accepted by adult education providers and the California Department of Education. These programs are effective, efficient, and equitable. This is the fifth year that these summary conclusions have been supported. They indicate the continued success of the Innovation Program initiative."

The data used in the reports provide a general guide for describing program participation and outcomes.

Evaluation Stages

Evaluations are conducted in two stages:

Formative: The formative evaluation is conducted during project implementation. Its purpose is to determine the level and efficiency of the project activities and to identify problems that need correction. The formative evaluation informs the project administrators of successes and problems in the project to date so that corrections can be made; it contributes to the improvement of the project. Methods include data collection, documentation, site visits, interviews, focus groups, program viewing, and student and teacher observation, as well as other methods that may be developed based upon the project.

Summative: The summative evaluation is performed at the end of the project and refers to the impact of the project on students, staff, or elements of the program addressed in the project's objectives. Its purpose is to assess the overall success and impact of the project, measuring learner achievement and how well the project objectives were met. Summative approaches are concerned primarily with measuring a project's predicted outcomes in an effort to determine whether the program or project intervention produced an independent effect on those outcomes. The evaluation report guides decisions at the end of the project to modify, expand, replicate, or discontinue the program, as well as informing others who may conduct similar programs.

Summative evaluation most commonly uses quantitative product or outcome indicators and data sources, such as performance measures, knowledge assessment instruments, portfolio assessment, or structured interviews.

Formative and summative evaluations are not considered separately; the formative evaluation contributes to and informs the summative evaluation. There are basically two ways to conduct summative evaluations: criterion-based studies and comparison studies.

The criterion-based evaluation design determines how well the project met its predicted objectives. The objectives specified in the project proposal are used as the standard to determine effectiveness. Usually, this would be the performance of the participants, which indicates impact (improved test scores, the ability to demonstrate a new skill, etc.). The conditions of performance and the level of proficiency are also noted and measured. From these, you can develop instruments that measure performance against the standard, or criterion, established by the project's goals and objectives.

Comparison studies determine whether one program is more effective than another. One could contrast the outcomes of a regular program with those of a pilot (experimental) program. If the experimental program produces the desired effect, this design will show that the project, rather than another variable, produced the outcome. Comparison, or norm, groups are students who are matched to the target group (by random selection or on some predetermined list of attributes and characteristics) and are pre-tested on the same measures, but are excluded from the intervention activities.

Comparison groups that have been matched are called control groups because theoretically they control for other variables that might account for differences in performance between the groups. Evaluations using comparison groups are usually considered more valuable in determining whether or not the project would be successful for adoption or adaptation by others.
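A comparison study of this kind boils down to contrasting pre/post gains across the two groups. The score pairs below are invented for illustration (loosely styled after scaled test scores, not actual CASAS data); a real evaluation would also apply a significance test before claiming an effect.

```python
# Invented (pre, post) score pairs for a pilot (experimental) group and a
# matched comparison group; illustrative numbers only.
pilot = [(215, 226), (208, 219), (221, 230), (199, 212)]
comparison = [(214, 219), (207, 211), (220, 224), (200, 207)]

def mean_gain(pairs):
    """Average post-minus-pre gain for a group of (pre, post) pairs."""
    return sum(post - pre for pre, post in pairs) / len(pairs)

# The difference in mean gains is the naive estimate of the program effect;
# with matched groups, it approximates the intervention's contribution.
effect = mean_gain(pilot) - mean_gain(comparison)
print(mean_gain(pilot), mean_gain(comparison), effect)  # 11.0 5.0 6.0
```

The matching step is what licenses reading the 6-point difference as a program effect rather than a difference in who enrolled.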

A time-sample design provides for continuous and periodic collection of student work over time. Analysis of performance on this work looks for trends or patterns of student change that can be inferred to result from the project intervention.

Authentic or Alternative Assessment: One trend in education is to use assessment procedures that do not rely on standardized tests. The advantage is that alternative assessment can provide a more authentic description of the area being measured. These tend to be more "qualitative" and include "portfolio assessment" or the collection of work samples that can be analyzed against a set of predetermined criteria. Educational technology projects provide an opportunity to develop and use alternative assessment techniques.

The evaluation design that will usually provide the most information is the pre-post comparison group design. The post-test only design does not control for preexisting conditions, or variations, in performance or knowledge. The control group design produces credible results that in the past have convinced skeptics of the worth of many programs. Unfortunately, the use of control groups is often impossible for technology based projects.

Qualitative approaches exist that do not require control groups and may work in these situations.

Most projects use the criterion-based design for groups or individuals, or the pre-post test design. This design lacks the controls that separate out extraneous variables, making it difficult to attribute the desired or predicted outcome to the intervention. However, the criterion-based design does make it possible to assess the degree to which the predicted outcomes were attained. In a pre-post test design, the test norms, in effect, become the criterion. This design is much more useful if there is some external standard available (such as national or state norm scores) to be used as the standard for comparison.
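Under a criterion-based design, the check itself is mechanical: compare each learner's pre-to-post gain against the criterion stated in the project objectives. The 5-point criterion and the scores below are made-up illustrations, not actual benchmarks.

```python
# Hypothetical project objective (criterion): each learner gains at least
# 5 scale points from pre-test to post-test. The figure is illustrative.
CRITERION_GAIN = 5

# Made-up (pre, post) scores keyed by learner ID.
scores = {
    "learner-001": (212, 221),
    "learner-002": (205, 208),
    "learner-003": (218, 226),
}

# Flag whether each learner's gain meets the criterion, then summarize.
met = {lid: (post - pre) >= CRITERION_GAIN for lid, (pre, post) in scores.items()}
share_met = sum(met.values()) / len(met)
print(f"{share_met:.0%} of learners met the criterion gain")  # 67% ...
```

Note the limitation discussed above: this tells you the degree of attainment, but not whether the intervention, rather than an extraneous variable, produced the gains.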

Summative evaluations tend to use quantitative measures such as standardized or criterion-referenced tests. Qualitative techniques can be used as well, such as portfolio assessment.

The strengths of quantitative and experimental designs are that:

  • when appropriate, this model minimizes evaluator bias by defining data collection and analysis procedures simply and concretely; and
  • procedures lend themselves well to replication and cross-comparison with other program locations.

Weaknesses of quantitative and experimental designs are that:

  • it is easy to be misled in data analysis, falsely assuming that the program is causing the outcomes when those outcomes are really being caused by unidentified intervening factors, or that the results can be generalized to a population when the students studied, surveyed, or tested do not represent a random sample of that population; and
  • the evaluation design is so structured and rigid that valuable but unanticipated outcomes may be missed because they have not been expressed as variables.
