After talking to some other philosophy PhDs who have successfully pivoted to the world of adult learning, I'm currently exploring jobs in instructional design, learning and development, and software training. Across these roles, three frameworks for constructing and evaluating adult learning materials consistently pop up: ADDIE, SAM, and the Kirkpatrick Model.
As someone who has 7+ years of experience in adult education through my undergraduate teaching and public-facing work, I'm curious to see how these models stack up. How precise are they? What are the strengths and drawbacks? Are they practically useful?
ADDIE (Source: What is ADDIE?)
ADDIE is a linear model for course development and evaluation, and the full cycle can be repeated as a loop.
Analyze - Identify learning needs and objectives, and understand the training's context.
Design - Outline key learning objectives, course structure, and teaching strategies.
Develop - Create training materials, assessments, and learning activities.
Implement - Put the training into action.
Evaluate - Use qualitative and quantitative feedback to determine efficacy.
My first reaction is that this looks like a less effective version of the user experience design process. UX designers go through multiple rounds of feedback to improve a design before it's fully implemented. Although this might take more time on the front end, it can save the long and costly process of having to rebuild an ineffective training.
Second, as a former instructor, I almost never followed all of these steps exactly in order. I tended to start with a general course idea; then I'd think of a cool assignment or a great reading to include, and suddenly I'd be filling out the syllabus before I'd even finished the basic course description or learning objectives. Throughout this process, I would also often converse with colleagues to share ideas.
I also made it a point in all my classes to collect feedback at several points: early on, towards the middle of the semester, and at the end. For example, I offered an optional anonymous feedback assignment where students could weigh in as they chose. Not all of the feedback was helpful, but I was able to make targeted changes to my courses in response to clear, constructive suggestions.
Third, I think the main pros of ADDIE are that it emphasizes understanding the context of the training and provides a stepwise framework for instructors who need a clear, set process. I am always in favor of understanding the context—I wrote a whole dissertation on understanding responsibility and character in context! The same absolutely goes for learning. It's also always important to be able to articulate your process to yourself and have a shared understanding of how a process involving multiple stakeholders will proceed. For individuals and institutions who require a stable and consistent process, ADDIE can provide that, even if it's at the cost of flexibility.
Fourth, and this is a nit-picky philosopher's point: designing and developing aren't fundamentally different activities. The distinction ADDIE seems to be getting at is that you should sketch the big-picture ideas first and then fill them in at the level of detail. If you're like me, however, the big picture and the little details are always in conversation with each other.
Overall reaction: Meh.
SAM (Source: The SAM Approach)
SAM was developed in response to the inflexibility and extended timelines of ADDIE and stands for "Successive Approximation Model." It has three stages:
Prep - Identifying learning styles, current knowledge, and skills to develop, and brainstorming initial ideas.
Iterative Design - Setting the project timeline, developing initial design ideas, and getting feedback.
Development - Creating the design proof in three phases: Alpha (a complete draft), Beta (remaining fixes applied), and Gold (ready to release).
Unlike ADDIE, SAM emphasizes small steps, stakeholder feedback throughout the process, and incremental improvement. Getting something on the page, design-wise, beats trying to produce a fully worked-out product from the start, and the Alpha, Beta, and Gold phases allow for user feedback during rollout.
Initially, I like this model a lot more than ADDIE. It builds in collaborating, gathering useful feedback, and finding productive responses to failure. Working closely with other people to find creative solutions to educational problems can be a lot of fun.
At the same time, there's something to be said for the slow, methodical way of thinking involved in ADDIE. If you're thinking about how to communicate complex information that's layered in multiple stages of concept-building and retention, rapid prototyping might not be able to handle the kind of systematic educational planning required. It might actually be easier to have one or two designers focusing on a curriculum at the big-picture design stages rather than having too many chefs in the kitchen at each stage.
Ideally, I would prefer something between ADDIE and SAM (though closer to SAM), with sequential handoffs between fast and slow steps and between designers and stakeholders. This could allow for more big-picture thinking from designers while still maintaining multiple rounds of feedback and prototypes before launch. At some point, the expertise educators have built up in designing courses should be fully utilized without constant interference. This might actually speed up the process in the end by starting with better solutions that require less iterative improvement.
Overall reaction: Good, but could be better.
The Kirkpatrick Model (Source: What is the Kirkpatrick Model?)
Unlike ADDIE and SAM, which provide steps for development and evaluation, the Kirkpatrick Model is a method for evaluating the results of a curriculum or training program, in four levels:
Level Four: Results - Determining if the training improved relevant organizational outcomes.
Level Three: Behavior - Checking to see if workers are actually applying their training on the job.
Level Two: Learning - Finding out if participants gained the relevant skills, knowledge, confidence, etc.
Level One: Reaction - Asking if learners found the training relevant to their jobs, engaging, and helpful.
One really cool thing about the Kirkpatrick Model is that it focuses almost entirely on how the training has impacted the learners. Sometimes, in undergraduate education, course construction is much more about what interests the instructor and what teaching style they've developed over time. While there is room for mid- and post-course surveys, there's no real way to check for student behavioral changes or to see whether, say, your ethics course has actually made the world a little more ethical.
I also think it's good that only the first level of evaluation is up to the learner. Speaking as someone who has received a lot of different undergraduate reviews for my courses, the feedback is often vague, contradictory, and generally unhelpful. I actually changed my survey questions to try to get more meaningful responses and specific suggestions, which did marginally improve the quality of feedback I received. I want learners to appreciate the course and find it relevant, but they aren't always the best reviewers—I've received more useful feedback from other educators sitting in on my courses.
One final thing I appreciate is that while metrics are important at several levels (KPIs for Level Four, test scores for Level Two, survey ratings for Level One), there is room for a holistic qualitative evaluation of the training, which may lead to better-informed solutions for improving it in the future. And while taking the time to study learning outcomes at each level is potentially costly, it may speed up the development of future projects.
Overall reaction: This is fun.
Have you developed trainings or courses for adults? What processes and methods do you prefer?
Photo Credit: Salvatore Ventura