Every college student registers for classes hoping for academic success. However, college study can be challenging, even for students who often earned As and Bs in elementary and secondary school (Macalester University, n.d.). Research tells us that a lack of time management skills, life challenges outside students' control, difficult content, and not knowing how to learn are among the top factors contributing to academic failure in college (Fetzner, 2013; Texas A&M Today, 2017; Perez, 2019). In this blog post, we will examine why we should teach college students time management skills, and how we should teach them those skills.

Why should we teach college students time management skills? 

Fetzner (2013) surveyed over 400 students who had dropped at least one online course and reported the top 10 ranked reasons students drop courses in college:

  1. 7% – I got behind and it was too hard to catch up.
  2. 2% – I had personal problems (health, job, child care).
  3. 7% – I couldn’t handle combined study plus work or family responsibilities.
  4. 3% – I didn’t like the online format.
  5. 3% – I didn’t like the instructor’s teaching style.
  6. 8% – I experienced too many technical difficulties.
  7. 2% – The course was taking too much time.
  8. 0% – I lacked motivation.
  9. 3% – I signed up for too many courses and had to cut down on my course load.
  10. 0% – The course was too difficult.

Student services staff at Oregon State University Ecampus also confirm, based on their daily interactions with online students, that many college students lack time management skills (Perez, 2019). Now that we know many college students lack sufficient time management skills, do we leave them to struggle and learn those skills on their own? Or is there something we can do to help students develop time management skills so they thrive throughout their college courses? And who can help?

Who can help?

Many higher education professionals, including instructors, instructional designers, advisors, student success coaches, and administrators, can help students develop time management skills. For example, New Student Orientation could include a module on time management. However, Perez (2019) raised a good point: New Student Orientation usually has so much information to cover that there is very little room for thorough time management training, even though we know it is an area where many of our students need improvement. Advisors can also help students with time management skills. Unfortunately, with the current advisor-to-student ratio and 15 minutes of consultation time per student, that is very unlikely to happen either. Last but not least, instructors can help students with time management skills in every course they teach. If instructors are busy, instructional designers can help with templates or pre-made assignments that give students the opportunity to practice time management skills.

How can instructors teach students time management skills?

How can instructors and instructional designers keep students from falling behind? A couple of crucial solutions are teaching students time management skills and giving them opportunities to plan time for readings, quizzes, writing original discussion posts, responding in discussion forums, and working on assignments, homework problems, papers, and projects. As for self-help materials, there are abundant resources on how students can improve their time management skills on their own. Apps and computer programs can also help us manage time better. Sabrina Collier (2018) recommended over ten time management apps, including myHomework Student Planner, Trello, Evernote, Pomodoro apps, StayFocused, Remember the Milk, and more.

I personally use Outlook Calendar, Google Calendar, and a Word document to create my personalized study schedule at the beginning of a new term. Rice University's Center for Teaching Excellence provides an online course workload estimator that is worth checking out. Read-O-Meter by Niram.org will estimate reading time for you if you copy and paste text into its input window. In the Canvas learning management system, instructors can help students plan their total study time by making that time visible: stating an estimated time for each learning activity, such as estimated reading time, video length, and estimated homework time. The following is an example Dr. Meta Landys used in her BI 319 online course.

Task list for students with the estimated time to complete each item

Image 1: Task Time Estimate and Visual Calendar of the Week in BI 319 “Critical Thinking and Communication In the Life Sciences” online with Instructor Dr. Meta Landys.
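Read-O-Meter-style estimates are simple enough to reproduce yourself. Here is a minimal Python sketch, assuming a reading speed of about 200 words per minute (a common default for adult readers; the actual tool may use a different rate):

```python
def estimated_reading_minutes(text: str, words_per_minute: int = 200) -> float:
    """Estimate how long a passage takes to read.

    200 wpm is an assumed average adult reading speed; Read-O-Meter's
    internal rate may differ.
    """
    word_count = len(text.split())
    return word_count / words_per_minute

# A 1,000-word reading takes roughly 5 minutes at 200 wpm.
print(round(estimated_reading_minutes("word " * 1000)))  # 5
```

An instructor could run each course reading through an estimator like this (or through Read-O-Meter itself) and publish the result next to the reading in Canvas.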

At the program and institutional levels, keeping important dates visible will also help students stay on top of their schedules and not miss important deadlines. At Oregon State University, a user-friendly calendar is created for the parents and families of our student population, which includes important dates for academic success as well as fun campus events. For example, the page for October 2019 shows October 6th as the last day to drop a fall term course with a 100% tuition refund, and the last day to add a fall term course online without departmental approval. These important dates could also be added to Canvas course modules or announcements as friendly reminders to help students make relevant decisions in time.

Parent & Family Calendar 2019-2020

Image 2: Oregon State University Parent & Family Calendar with important dates such as the last day to drop a course with a 100% tuition refund, the first date to register for courses for the coming term, etc.

It is true that there are plenty of resources on time management for students to learn from on their own. However, not all students know how to manage their time, even with the aid of digital tools. Part of the problem is that when students are not required to make a detailed schedule for themselves, most of them will choose not to. The other part is that there are very few activities in which students are required to show instructors that they have planned time for readings and all other study activities in the courses they are taking. In Canvas, to train students in time management skills, instructors could give an assignment in week 1 asking students to plan their weekly learning tasks for each of the 11 weeks. Students can use a Word document, an Excel spreadsheet, apps, or Google Calendar to plan their time. Charlotte Kent (2018) suggests asking students to include sleep time, meal time, commute time, worry time, and free time, plus four to eight hours of study time per week per course. Yes, scheduling worry time and free time is part of the trick to time management success!

Image 3: A color-coded google calendar example of scheduling study time for a student taking two courses online while working full time and raising children.
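One way to make such a plan concrete is to tally the instructor's per-task time estimates for each week before blocking them into a calendar. The task names and minutes below are purely illustrative, not drawn from BI 319 or any real course:

```python
# Hypothetical week-1 task list with instructor-provided estimates, in minutes.
week_1_tasks = {
    "Chapter 3 reading": 90,
    "Lecture videos": 45,
    "Original discussion post": 30,
    "Two discussion replies": 30,
    "Homework problem set": 135,
}

total_minutes = sum(week_1_tasks.values())
print(f"Week 1 total: {total_minutes / 60:.1f} hours")  # Week 1 total: 5.5 hours
```

A student can then schedule that total, alongside sleep, meals, commuting, worry time, and free time, before the week begins.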

To sum up, there are many ways instructors can help students develop time management skills, rather than assuming it is each student's responsibility to learn time management alone. Instructors could state the estimated study time for each learning activity in a module or week. Instructors could require students to plan their study time for the entire term at the beginning of the course. And instructors could recommend that students use apps and tools to help them manage time as well! If you have other ways to help students manage time well, feel free to contact me and share them with us: Tianhong.shi@oregonstate.edu.

 

References

Collier, S. (2018). Best time-management apps for students. Top Universities Blog. https://www.topuniversities.com/blog/best-time-management-apps-students

Fetzner, M. (2013). What do unsuccessful online students want us to know? Journal of Asynchronous Learning Networks, 17(1), 13-27.

Kent, C. (2018, September 18). Teaching students to manage their time. Inside Higher Ed. Retrieved from https://www.insidehighered.com/advice/2018/09/18/how-teach-students-time-management-skills-opinion

Perez, M. (2019). September 2019 Oregon State University Ecampus Un-All-Staff meeting.

Oregon State University. (2019). Parent & Family Calendar 2019-2020. Retrieved from https://families.oregonstate.edu/sites/families.oregonstate.edu/files/web_2019_nspfo_calendar.pdf

 

Image: several birds sitting as one takes flight. Photo by Nathan Dumlao on Unsplash.

My interest in learning about motivation in education began many years ago when I started learning about motivation in game design. In order to better understand motivation in a classroom, while playing a game, and in an online learning environment, I am turning to the body of research that has grown from Edward Deci and Richard Ryan's Self-Determination Theory (SDT). This blog post will be a continuation of my previous SDT Primer and an excellent companion to Chris Lindberg's Games as a Model for Motivation and Engagement series of posts.

While I had intended to use this entry for discussing grades and assessment, an important piece of SDT and its application is understanding the different types of motivation explored by the SDT community of researchers. This post will define and expand on the numerous types of motivation in preparation for a discussion on grades and assessment.

Before we begin, take a brief minute to reflect: what moves you to do something? As an example, what moved you to open this blog post and begin reading it?

The Autonomy-Control Continuum

The types of motivation you might be most familiar with are intrinsic and extrinsic motivation. Intrinsic motivation refers to doing something because it is inherently interesting or enjoyable, while extrinsic motivation refers to doing something because it leads to a separable outcome. I might be moved to read a chapter of a novel over lunch because it is inherently enjoyable (intrinsic), or I might be moved to run errands over lunch because of external factors, like visiting the bank or post office due to their limited open hours (extrinsic). While these opposites are often displayed and discussed as an either-or, they are really just two ends of a spectrum that contains more nuanced gradations.

(Gagné & Deci, 2005, p. 336)

The autonomy-control continuum (Ryan & Deci, 2017) is an outgrowth of the intrinsic-extrinsic spectrum, representing the range between autonomous regulation, or a feeling of complete volition, and controlled regulation, or a feeling of being externally or internally compelled to act. While intrinsic motivation falls under the category of autonomous regulation, extrinsic motivation can sometimes come close to the autonomy end of the spectrum for personally important or valued tasks, or can swing all the way to the controlling side with external rewards or punishments. And on the extreme opposite end of the spectrum from intrinsic motivation is amotivation, or the complete absence of intentional regulation. Ideally, we hope that students will feel autonomous motivation, which has also been shown to be optimal for learning.

Internalized Motivations: External vs. Internal

Now let's explore some of the murky gradations between feeling autonomous and controlled. The first step is to compare two degrees of controlled regulation: external vs. internal regulation. External regulation is motivation that is controlled by external factors. A student might experience external regulation when they have to complete a group project in a course: an external factor, the instructor in this case, dictates that students collaborate in groups for this project. Internal regulation (or introjection) occurs when internally controlling factors are at play, e.g., shame, guilt, or fear. Continuing with the group work example, a student might feel moved to complete a task for the group project by placing internal pressure on themselves: guilt if they perceive that they aren't pulling their weight, shame in being the last group member to respond to a discussion assignment, or fear that their lack of activity will punish everyone in the group with a lower grade. In both cases, the student feels controlled, either by an external factor or by internal pressure.

Identified & Integrated Regulations

As we move closer to the autonomy end of the spectrum, we run into identified regulation, or the acceptance of extrinsic value. Our student from the example above might feel extrinsically motivated to complete the group project, but through the use of a rationale statement from the instructor, might accept the value of this group work, thus feeling more of a sense of autonomy than with external or internal regulation. Lastly, and moving even closer to autonomy, is integrated regulation, or adding the value of a task to one’s own beliefs or sense of self. Perhaps through reflection or a particularly well designed group project, a student comes around and now believes that group work is an essential part of their desired educational experience. While integrated regulation is not the same as feeling autonomous, you might be able to imagine a situation where an identified or integrated regulation would feel more motivating than an external or internal regulation.

How to Begin Thinking About Grades

In a recent Q&A, Richard Ryan, one of the authors and lead researchers of SDT, responded that “there has been no empirical justification for why we have grades in schools at all.” My next blog post will dive deeper into the role that grades and assessment play in SDT and motivation. In the meantime, I would like to pose some questions to get you started thinking about how you use grades in relation to motivation in your courses:

  • Do you use grades to create external regulation of behavior in your course?
    • Are you grading a behavior or the demonstration of a skill?
  • Do you want to emphasize performance goals or mastery goals?
  • Are there ways to help students identify and integrate the activities and assessments in your course?
  • Do you need to grade this activity/assessment/task?

These are big, difficult questions! And thinking about motivation in terms of a spectrum is complicated! If you find yourself wanting to continue the discussion of motivation in course design, check out the companion blog series mentioned in the introduction above, contact your instructional designer, or keep an eye out for other opportunities to continue the discussion at various upcoming Ecampus events!

References & Resources

Center for Self-Determination Theory (CSDT). (2019).

  • This website is a treasure-trove of resources on SDT and its application in numerous fields, including education.

Gagné, M., & Deci, E. (2005). Self-determination theory and work motivation. Journal Of Organizational Behavior, 26(4), 331–362.

Ryan, R. M., & Deci, E. L. (2017). Self-Determination Theory: Basic psychological needs in motivation, development, and wellness. New York: Guilford Press.

A new Ecampus project has underscored the potential of graduate teaching assistants (GTAs) to add immense value to the process of reviewing and improving courses by thinking both as an instructor and as a student. Ecampus analyzed data and collaborated with partners in academic departments to identify five courses in which students were experiencing barriers to success that were not being addressed in Ecampus’ rigorous course development and support process. The academic departments also identified five pedagogically-minded and innovative GTAs to analyze and begin addressing the barriers to student success. The result is a pilot that we believe will be beneficial for the students, the GTA Fellows, and the faculty, while also providing learning opportunities for all stakeholders about how to tackle the most challenging course design problems. While we’re still in the first term of the pilot, our collaborative investigation process and emerging creative solutions have already made us very excited about the findings to come. 

Determining first steps 

At Oregon State Ecampus, we have a strong framework to help support and maintain course quality. Courses are carefully and thoughtfully designed through a collaborative effort between a faculty course developer who has received training in online course design best practices and an Ecampus instructional designer. The development process spans from the two-term initial design and build period, where we ensure courses meet our set of Ecampus Essentials, to iterative first-term adjustments, to support for continued lifetime maintenance, to formal course “refreshes” every 3-5 years. Finally, many of our courses are also submitted for and earn Quality Matters certification, an important indicator of quality based on research-based standards. This rigorous and supportive development process aims to make sure each course stays relevant, accessible, and effective for learners.

But, in spite of all this careful planning, development, review, and maintenance, what is the appropriate response when courses have recently come through this rigorous and comprehensive design process, faculty have been trained in best practices for teaching online, and students are still encountering barriers to success in the course?

We recently launched a new pilot project to tackle this question head-on. As a starting point, and using a few basic indicators of student persistence, retention, and success in our courses — such as the DFWU rate, or the rate at which students receive grades of D, F, W (withdrawal), or U (unsatisfactory) — we created an initial list of courses across our online offerings where students were least successful in passing or completing. From this list, we identified which courses had been redesigned within the last five years (to rule out our standard redevelopment process as a solution for increasing student success). The latter group of courses underwent additional review by our team to identify whether there were any stand-out issues that could be easily resolved.

What we arrived at was a short list of courses that had higher than usual DFWU rates and had been redesigned recently. In these courses, we knew that something else was going on beneath the surface; the underlying problem was not an obvious issue with either design or facilitation techniques. Many of the courses on this short list are problematic not for a high rate of D/F/U grades at the end of the term, but rather for a high rate of W (withdrawal) grades. Our Ecampus student population is largely composed of non-traditional students who have a different set of needs than our more traditional on-campus students; namely, they need flexibility to balance their busy out-of-school lives while also meeting their educational goals. So, through this pilot, we wanted to find an effective way to determine what could be changed to better support Ecampus students in staying in (and succeeding in) these courses that were particularly challenging for reasons we could not easily identify.
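For concreteness, the DFWU rate described above can be computed directly from a section's final grades. This sketch uses an invented grade distribution, not data from any actual course:

```python
from collections import Counter

def dfwu_rate(final_grades: list[str]) -> float:
    """Fraction of students receiving a D, F, W, or U grade."""
    counts = Counter(grade.upper() for grade in final_grades)
    dfwu_count = sum(counts[g] for g in ("D", "F", "W", "U"))
    return dfwu_count / len(final_grades)

# Illustrative 20-student section in which 6 students earn D/F/W/U.
grades = ["A"] * 8 + ["B"] * 4 + ["C"] * 2 + ["D", "F", "U"] + ["W"] * 3
print(f"DFWU rate: {dfwu_rate(grades):.0%}")  # DFWU rate: 30%
```

Breaking the count out by grade, as `Counter` does here, also makes it easy to see whether a high rate is driven by withdrawals rather than failing grades.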

Designing the pilot

With these course profiles compiled, we reached out to five department partners to assess their interest in collaborating on a project to further review and revise a course (or, in some cases, a sequence of courses). We proposed to fund a Graduate Teaching Assistant (GTA) for three consecutive quarters to evaluate these targeted courses and then to propose and implement innovative interventions, with an eye toward increasing online student success. In general, the pilots follow the schedule below:

  • Quarter 1: the GTA is an active observer of the course(s), and reviews previous sections’ data to look for patterns in obstacles that students might face; in collaboration with the faculty course lead and Ecampus staff, the GTA then proposes a first set of interventions for quarter 2; IRB approval of research is confirmed if necessary for design of interventions and/or for desire for possible future publication.
  • Quarter 2: the GTA continues to be an active observer in the course(s) and helps to implement the first set of interventions; in collaboration with the faculty course lead and Ecampus staff, the GTA then proposes new or refined interventions for quarter 3.
  • Quarter 3: the GTA continues to be an active observer in the course(s) and helps the instructor to implement the new or refined interventions; data reporting is wrapped up and a campus presentation is arranged.

Note that, across the three quarters, the GTA does not undertake the traditional tasks associated with a teaching assistant in an online course, such as grading assignments, responding to student questions, and holding virtual office hours; here we modeled our pilot on fellowship programs such as Duke University's Bass Digital Education Fellowships. Rather, all stakeholders agreed to free the GTA from these time-consuming tasks so that they can focus their efforts on observational work and then on planning and implementing interventions. The instructors assigned to these courses continue their regular duties of interacting with and assessing students.

The unique advantage of GTAs

With our five unique pilots underway as of this summer, it has already become clear that the key to this pilot is the unique positioning of the GTA to tackle these student success problems from both the faculty and student perspectives.  At Oregon State, GTAs regularly serve as teaching assistants or instructors of record in on-campus, hybrid, and online courses, so our GTAs have come to these pilot projects with prior teaching experience (and, often, with some training in pedagogy and course design).  Yet, our pilot program GTAs are also still students themselves, so they are particularly attuned to the student experience as they follow and track current and upcoming groups of students working through these courses.

Our pilot will also benefit from the fact that these GTAs have a strong interest in pedagogy and in their own professional development as instructors.  With that in mind, we have worked to structure some of the individualized goals of each pilot to reflect how we can help the GTA get the most value out of this opportunity (such as through a campus presentation, a published paper when we have results, or connecting with Ecampus leaders as possible references for job applications).  The final name for our pilot – GTA Innovations for Student Success Fellowship – is crafted both to reflect the central goals of the pilot (student success) and to call out the important and unique work that GTAs are doing as fellows.

Looking forward (to sharing innovative interventions and results)

We are still in the very early stages of each of these pilots, so while we don't yet have any results to share, the deep engagement of our stakeholders in this process has been heartening, and wonderful plans are in the works for the first sets of interventions to be implemented this fall. We are also so pleased to see the support behind allowing this group of GTAs to inspire innovative online teaching within their home departments, and the willingness of the faculty who teach the courses under review to think collaboratively and differently about the creative ways we can support their online students.

As part of their pilot work, we will encourage these GTAs to contribute to the blog and share their insights and takeaways along the way.  What they learn about how to support student needs in these particularly challenging courses and course sequences, learning design, teaching methods that better motivate disengaged learners, etc. will no doubt be useful to Ecampus stakeholders across the university and beyond.  Stay tuned for more!

Introduction

For those who work in higher education, it may not come as a surprise that the field of instructional design has grown in tandem with the expansion of online programs and courses. Evidence of this growth abounds. While the discipline of instructional design has expanded rapidly in recent years, the history of instructional design is not well known by those outside of the field.

This post will cover a brief history of instructional design with a particular emphasis on design: What influences design? How are design decisions made? How has the way we approached design changed over time? We’ll also consider how instructional designers actually design courses and the importance of course structure as an inclusive practice.

Instructional Design: Theory and History

Every instructional design curriculum teaches three general theories or theoretical frameworks for learning: behaviorism, cognitivism, and constructivism. While an instructional designer (ID) probably wouldn’t call herself a cognitivist or a behaviorist, for example, these theories influence instructional design and the way IDs approach the design process.

The field of instructional design is widely believed to have originated during World War II, when training videos like this one were created to prepare soldiers with the knowledge and skills they would need in battle. This form of audio-visual instruction, although embraced by the military, was not initially embraced by schools.

Image: “B.F. Skinner” Portrait Art Print by Xiquid.

In the 1950s, behaviorists, such as B.F. Skinner, dominated popular thought on how to teach and design instruction. For behaviorists, learning results in an observable change in behavior. The optimal design of a learning environment from a behaviorist perspective would be an environment that increases student motivation for learning, provides reinforcement for demonstrating learning, and removes distractions. Behaviorists are always designing for a specific response, and instruction is intended to teach discrete knowledge and skills. For behaviorists, motivation is critical, but only important to the extent that it elicits the desired behavior. 

Cognitivism was largely a response to behaviorism. Cognitivists emphasized the role of cognition and the mind; they acknowledged that, when designing learning environments, there is more to consider than the content to be learned. More than environmental factors and instructional components, the learners’ own readiness, or prior knowledge, along with their beliefs and attitudes, require consideration. Design, from a cognitivist approach, often emphasizes preparedness and self-awareness. Scaffolding learning and teaching study skills and time-management (metacognitive skills) are practices grounded in a cognitivist framework.

While cognitivists emphasize the learner experience, and in particular, acknowledge that learners’ existing knowledge and past histories influence their experience, the learner is still receiving information and acting on it–responding to carefully designed learning environments.

Constructivism, the most current of the three frameworks, on the other hand, emphasizes that the learner constructs their own understanding of the world rather than just responding to it. Learners are actively creating knowledge as they engage with the learning environment.

All–or nearly all–modern pedagogical approaches are influenced by these theoretical frameworks for learning.

Design Approaches

A single course can be seen as a microcosm of theoretical frameworks, historical models, and value-laden judgements of pedagogical approaches

Learning theories are important because they influence our design models, but by no means are learning theories the only factor guiding design decisions. In our daily work, IDs rely on many different tools and resources. Often, IDs will use multiple tools to make decisions and overcome design challenges. So, how do we accomplish this work in practice?

  1. We look to established learning outcomes. We talk about learning goals and activities with faculty. We ask questions to guide decision making about how to meet course learning outcomes through our course design.
  2. We look to research-based frameworks and pedagogical approaches such as universal design for learning (UDL), inclusive design, active learning, student-centered design, and many other models. These models may be influenced by learning theory, but they are more practical in nature.
  3. We look to human models. We often heed the advice and follow the examples of our more experienced peers.
  4. We look to our own past experiences and solutions that have worked in similar situations, and we apply what we learned to future situations.
  5. We make professional judgements; judgements rooted in our tacit knowledge of what we believe “good design” looks like. For better or for worse, we follow our intuition. Our gut.

Over time, one can see that instructional design has evolved from an emphasis on teaching discrete knowledge and skills that can be easily measured (behaviorism) to an emphasis on guiding unique learners to actively create their own understanding (constructivism). Design approaches, however, are not as straightforward as simply taking a theory and applying it to a learning situation or some course material. Instructional design is nuanced. It is art and science. A single course can be seen as a microcosm of theoretical frameworks, historical models, and value-laden judgements of pedagogical approaches–as well as value-laden judgements of disciplinary knowledge and its importance. But. That’s another blog post.

Design Structure to Meet Diverse Needs

Meeting diverse needs, however, does not necessitate complexity in course design

If learners are unique, if learning can’t be programmed, if learning environments must be adaptable, if learners are constructing their own knowledge, how is all of this accommodated in a course design?

Designing from a modern constructivist perspective, from the viewpoint that students have vastly different backgrounds, past experiences, and world-views, requires that many diverse needs be accommodated in a single course. Meeting diverse needs, however, does not necessitate complexity in course design. Meeting diverse needs means that we need to provide support, so that it is there for those who need it, but not distracting to those who don’t need it. Design needs to be intuitive and seamless for the user.

Recent research on inclusive practices in design and teaching identifies structure as an inclusive practice. Design can be viewed as a way of applying, or ensuring, course structure. In that way, working with an instructional designer will make your course more inclusive. But, I digress. Or, do I?

Sathy and Hogan contend, in their guide, that structure benefits all students, but that some, particularly those from underrepresented groups, benefit disproportionately. Conversely, not enough structure leaves too many students behind. Since many of the same students who benefit from additional course structure also succeed at lower rates, providing course structure may also help to close the achievement gap.

How are We Doing This?

The good news is that Ecampus is invested in creating courses that are designed–or structured–in a way that meets the needs of many different learners. Working with an Ecampus instructional designer will ensure that your course materials are clearly presented to your students. In fact, many of the resources we provide–course planning templates, rubrics, module outlines, consistent navigation in Canvas, course banners and other icons and visual cues–are intended to ensure that your students can navigate your course materials and find what they need, when they need it.

References

Icons made by phatplus and Freepik from www.flaticon.com are licensed by CC 3.0 BY

Boling, E., Alangari, H., Hajdu, I. M., Guo, M., Gyabak, K., Khlaif, Z., . . . Techawitthayachinda, R. (2017). Core Judgments of Instructional Designers in Practice. Performance Improvement Quarterly, 30(3), 199-219. doi:10.1002/piq.21250

Eddy, S.L. and Hogan, K. A. (2017) “Getting Under the Hood: How and for Whom Does Increasing Course Structure Work?” CBE—Life Sciences Education. Retrieved from https://www.lifescied.org/doi/10.1187/cbe.14-03-0050

Sathy, V. and Hogan, K.A. (2019). “Want to Reach All of Your Students? Here’s How to Make Your Teaching More Inclusive: Advice Guide.” Chronicle of Higher Education. Retrieved from https://www.chronicle.com/interactives/20190719_inclusive_teaching

Tanner, K.D. (2013) “Structure Matters: Twenty-One Teaching Strategies to Promote Student Engagement and Cultivate Classroom Equity,” CBE—Life Sciences Education. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3762997/

Traveler items

Have you ever taken a trip with a tour group? Or looked at an itinerary of places and activities to see if it meets your expectations and/or fits into your schedule? Most guided tours include an itinerary with a list of destinations, activities, and time allotted. This helps travelers manage their expectations and time.

Now, have you ever thought of an online course as a guided trip? The instructor is similar to a tour guide, leading student travelers to their destination. And, like travelers, students naturally want to know what to expect and how much time to commit to their learning. They could benefit from a detailed itinerary, or schedule of activities, that includes estimated time commitment for each week.

As an instructional designer for hybrid and online courses, I like to include a detailed schedule for each week to help students organize their time and stay on task. To determine what goes on that schedule, I begin the design process with a draft of the course syllabus that outlines where the students are headed (learning outcomes) and how the instructor will know they have arrived (assessments). This draft helps me understand the instructor’s plans for the course. Together, we look at the learning outcomes and assessments, as well as course requirements like credit hours, to determine appropriate learning activities along the way. The course credit hours inform the workload requirements for students. For example, Oregon State University is on the quarter system, and its policy states that one credit hour is equivalent to 3-4 hours of course work each week. If a course is worth 3 credit hours, then students should expect to dedicate 9-12 hours each week to their course. I use a workload estimator created by the Center for Teaching Excellence at Rice University to help with the estimates. This tool provides a reasonable estimation of the workload expectations for students and can be used to verify whether the course meets the university’s guidelines for the assigned credit hours. (For more information on how the estimates are made, see the Rice University CTE blog post.)
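The credit-hour arithmetic above is simple enough to sketch in a few lines of code. Here is a minimal, illustrative Python example; the 3-4 hours per credit figure is OSU's quarter-system guideline described above, and the function and constant names are my own, not part of any official estimator:

```python
# Weekly workload range under a quarter-system guideline of
# 3-4 hours of course work per credit hour per week.
# (Names are illustrative; this is not the Rice CTE estimator.)

HOURS_PER_CREDIT = (3, 4)  # (low, high) hours per credit per week

def weekly_workload(credit_hours):
    """Return the (min, max) expected weekly hours for a course."""
    low, high = HOURS_PER_CREDIT
    return (credit_hours * low, credit_hours * high)

low, high = weekly_workload(3)
print(f"A 3-credit course: expect {low}-{high} hours per week")  # 9-12 hours
```

A quick check like this can help verify that the activities you plan for a given week add up to something inside the expected range.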

While all of this information is useful to instructors, I also encourage them to share a weekly list of activities, along with the calculations, with students. Tour guides provide detailed schedules informing travelers where they are going, the order of the activities, and the time allotted to each activity, so why not do the same for students? Below, I’ve included a sample of how I do this in my courses. I create a weekly table on an introduction page at the beginning of each module within our LMS. This table includes a suggested order of the activities and the estimated time commitment to complete them, along with the official due dates. Anecdotally, students appreciate the schedule and use it to manage their time. I encourage you to consider using a detailed schedule in your future courses.

Example of a weekly Detailed Schedule

References

Barre, E. (2016, July 11). How much should we assign? Estimating out of class workload [Blog post]. Retrieved from http://cte.rice.edu/blogarchive/2016/07/11/workload.

Photo by Dariusz Sankowski on Unsplash

This post is the second in a three-part series that summarizes conclusions and insights from research of active, blended, and adaptive learning practices. Part one covered active learning, and today’s article focuses on the value of blended learning.

First Things First

What, exactly, is “blended” learning? Dictionary.com defines it as a “style of education in which students learn via electronic and online media as well as traditional face-to-face learning.” This is a fairly simplistic view, so Clifford Maxwell (2016), on the Blended Learning Universe website, offers a more detailed definition that clarifies three distinct parts:

  1. Any formal education program in which at least part of the learning is delivered online, wherein the student controls some element of time, place, path or pace.
  2. Some portion of the student’s learning occurs in a supervised physical location away from home, such as in a traditional on-campus classroom.
  3. The learning design is structured to ensure that both the online and in-person modalities are connected to provide a cohesive and integrated learning experience.

It’s important to note that a face-to-face class that simply uses an online component as a repository for course materials is not true blended learning. The first element in Maxwell’s definition, where the student independently controls some aspect of learning in the online environment, is key to distinguishing blended learning from the mere addition of technology.

You may also be familiar with other popular terms for blended learning, including hybrid or flipped classroom. Again, the common denominator is that the course design intentionally, and seamlessly, integrates both modalities to achieve the learning outcomes.

Let’s examine what the research says about the benefits of combining asynchronous, student-controlled learning with instructor-driven, face-to-face teaching.

Does Blended Learning Offer Benefits?

Blended Learning Icon

The short answer is yes.

The online component of blended learning can help “level the playing field.” In many face-to-face classes, students may be too shy or reluctant to speak up, ask questions, or offer an alternate idea. A blended environment combines the benefit of giving students time to compose thoughtful comments for an online discussion without the pressure and think-on-your-feet demand of live discourse, while maintaining direct peer engagement and social connections during in-classroom sessions (Hoxie, Stillman, & Chesal, 2014). Blended learning, through its asynchronous component, allows students to engage with materials at their own pace and reflect on their learning when applying new concepts and principles (Margulieux, McCracken, & Catrambone, 2015).

Since well-designed online learning produces equivalent outcomes to in-person classes, lecture and other passive information can be shifted to the online format, freeing up face-to-face class time for active learning, such as peer discussions, team projects, problem-based learning, supporting hands-on labs or walking through simulations (Bowen, Chingos, Lack, & Nygren, 2014). One research study found that combining online activities with in-person sessions also increased students’ motivation to succeed (Sithole, Chiyaka, & McCarthy, 2017).

What Makes Blended Learning So Effective?

Nearly all the research reviewed concluded that blended learning affords measurable advantages over exclusively face-to-face or fully online learning (U.S. Department of Education, Office of Planning, Evaluation, and Policy Development, 2009). The combination of technology with well-designed in-person interaction provides fertile ground for student learning. Important behaviors and interactions such as instructor feedback, assignment scaffolding, hands-on activities, reflection, repetition and practice were enhanced, and students also gained advantages in terms of flexibility, time management, and convenience (Margulieux, McCracken, & Catrambone, 2015).

Blended learning tends to benefit disadvantaged or academically underprepared students, groups that typically struggle in fully online courses (Chingos, Griffiths, Mulhern, & Spies, 2017). Combining technology with in-person teaching helped to mitigate some challenges faced by many students in scientific disciplines, improving persistence and graduation rates. And since blended learning can be supportive for a broader range of students, it may increase retention and persistence for underrepresented groups, such as students of color (Bax, Campbell, Eabron, & Thomson, 2014–15).

Blended learning benefits instructors, too. When asked about blended learning, most university faculty and instructors believe it to be more effective (Bernard, Borokhovski, Schmid, Tamim, & Abrami, 2014). The technologies used often capture and provide important data analytics, which help instructors more quickly identify under-performing students so they can provide extra support or guidance (McDonald, 2014). Many online tools are interactive, fun and engaging, which encourages student interaction and enhances collaboration (Hoxie, Stillman, & Chesal, 2014). Blended learning is growing in acceptance and often seen as a favorable approach because it synthesizes the advantages of traditional instruction with the flexibility and convenience of online learning (Liu, et al., 2016).

A Leap of Faith

Is blended learning right for your discipline or area of expertise? If you want to give it a try, there are many excellent internet resources available to support your transition.

Though faculty can choose to develop a blended class on their own, Oregon State instructors who develop a hybrid course through Ecampus receive full support and resources, including collaboration with an instructional designer, video creation and media development assistance. The OSU Center for Teaching and Learning offers workshops and guidance for blended, flipped, and hybrid classes. The Blended Learning Universe website, referenced earlier, also provides many resources, including a design guide, to support the transformation of a face-to-face class into a cohesive blended learning experience.

If you are ready to reap the benefits of both online and face-to-face teaching, I urge you to go for it! After all, the research shows that it’s a pretty safe leap.

For those of you already on board with blended learning, let us hear from you! Share your stories of success, lessons learned, do’s and don’ts, and anything else that would contribute to instructors still thinking about giving blended learning a try.

Susan Fein, Oregon State University Ecampus Instructional Designer
susan.fein@oregonstate.edu | 541-747-3364

References

  • Bax, P., Campbell, M., Eabron, T., & Thomson, D. (2014–15). Factors that Impede the Progress, Success, and Persistence to Pursue STEM Education for Henderson State University Students Who Are Enrolled in Honors College and in the McNair Scholars Program. Henderson State University. Arkadelphia: Academic Forum.
  • Bernard, R. M., Borokhovski, E., Schmid, R. F., Tamim, R. M., & Abrami, P. C. (2014). A meta-analysis of blended learning and technology use in higher education: From the general to the applied. J Comput High Educ, 26, 87–122.
  • Bowen, W. G., Chingos, M. M., Lack, K. A., & Nygren, T. I. (2014). Interactive learning online at public universities: Evidence from a six-campus randomized trial. Journal of Policy Analysis and Management, 33(1), 94–111.
  • Chingos, M. M., Griffiths, R. J., Mulhern, C., & Spies, R. R. (2017). Interactive online learning on campus: Comparing students’ outcomes in hybrid and traditional courses in the university system of Maryland. The Journal of Higher Education, 88(2), 210-233.
  • Hoxie, A.-M., Stillman, J., & Chesal, K. (2014). Blended learning in New York City. In A. G. Picciano, & C. R. Graham (Eds.), Blended Learning Research Perspectives (Vol. 2, pp. 327-347). New York: Routledge.
  • Liu, Q., Peng, W., Zhang, F., Hu, R., Li, Y., & Yan, W. (2016). The effectiveness of blended learning in health professions: Systematic review and meta-analysis. Journal of Medical Internet Research, 18(1). doi:10.2196/jmir.4807
  • Maxwell, C. (2016, March 4). What blended learning is – and isn’t. Blog post. Retrieved from Blended Learning Universe.
  • Margulieux, L. E., McCracken, W. M., & Catrambone, R. (2015). Mixing in-class and online learning: Content meta-analysis of outcomes for hybrid, blended, and flipped courses. In O. Lindwall, P. Hakkinen, T. Koschmann, & P. Tchoun (Ed.), Exploring the Material Conditions of Learning: Computer Supported Collaborative Learning (CSCL) Conference (pp. 220-227). Gothenburg, Sweden: The International Society of the Learning Sciences.
  • McDonald, P. L. (2014). Variation in adult learners’ experience of blended learning in higher education. In Blended Learning Research Perspectives (Vol. 2, pp. 238-257). Routledge.
  • Sithole, A., Chiyaka, E. T., & McCarthy, P. (2017). Student attraction, persistence and retention in STEM programs: Successes and continuing challenges. Higher Education Studies, 7(1).
  • U.S. Department of Education, Office of Planning, Evaluation, and Policy Development. (2009). Evaluation of Evidence-Based Practices in Online Learning: A Meta-Analysis and Review of Online Learning Studies. Washington, D.C.

Image Credits

  • Blended Learning Icon: Duke Innovation Co-Lab [CC0]
  • Leap of Faith: Photo by Denny Luan on Unsplash
  • School photo created by javi_indy – www.freepik.com

There are many benefits to using rubrics for both instructors and students, as discussed in Rubrics Markers of Quality Part 1 – Unlock the Benefits. Effective rubrics serve as a tool to foster excellence in teaching and learning, so let’s take a look at some best practices and tips to get you started.

Best Practices

Alignment

Rubrics should articulate a clear connection between how students demonstrate learning and the Course Learning Outcomes (CLOs). Solely scoring gateway criteria, the minimum expectations for a task (e.g., word count, number of discussion responses), can be alluring. Consider designing rubrics that move past minimum expectations and assess what students should be able to do after completing a task.

Detailed, Measurable, and Observable

Clear and specific rubrics communicate how to demonstrate learning, how performance will be measured, and what the markers of excellence are. These details provide students with a tool to self-assess their progress and level up their performance autonomously.

Language Use

Rubrics create the opportunity to foster an inclusive learning environment. Clear and consistent language takes a diverse student body into consideration. Online students hail from around the world and speak a variety of native languages, and learners may interpret the same words differently. Use simple terms with specific and detailed descriptions. Doing so creates space for students to focus on learning instead of decoding expectations. Additionally, use parallel language consistently: similar phrasing across performance levels (e.g., demonstrates, mostly demonstrates, doesn’t demonstrate) makes it easier to differentiate between levels for each criterion.

Tips of the Trade!

Suitability

Consider the instructional aim, learning outcomes, and the purpose of a task when choosing the best rubric for your course.

  • Analytic Rubrics: The hallmark design of an analytic rubric evaluates performance criteria separately. Characteristically this rubric’s structure is a grid, and evaluation of performance scores are on a continuum of levels. Analytic rubrics are detailed, specific, measurable, and observable. Therefore, this rubric type is an excellent tool for formative feedback and assessment of learning outcomes.
  • Holistic Rubrics: Holistic rubrics evaluate criteria together in one general description for each performance level. Ideally, this rubric design evaluates the overall quality of a task. Consider applying a holistic rubric when an exact answer isn’t needed, when deviations or errors are allowed, and for interpretive/exploratory activities.
  • General Rubrics: Generalized rubrics can be leveraged to assess multiple tasks that have the same learning outcomes (e.g., reflection paper, journal). Performance dimensions focus solely on outcomes versus discrete task features.

Explicit Expectations

Demystifying expectations can be challenging. Consider articulating performance expectations in the task description before deploying a learning task, and refrain from using rubrics as a standalone vehicle to communicate expectations; students may miss the rubric altogether and fail to meet them. Secondly, make the implicit explicit! Be transparent. Provide students with all the information and tools they need to be successful from the outset.

Iterate

A continuous improvement process is key to developing high-quality assessment rubrics. Plan for multiple tests and revisions of the rubric. There are several strategies for testing a rubric: 1) ask students, teaching assistants, or professional colleagues to score a range of work samples with the rubric; 2) integrate opportunities for students to conduct self-assessments; 3) assess a task with the same rubric across course sections and academic terms. After testing is complete, reflect on how effectively and accurately the rubric performed. Revise and redeploy as needed.

Customize

Save some time, and don’t reinvent the wheel. Leverage existing samples and templates. Keep in mind, though, that existing resources weren’t built for your specific course; customization will be needed to ensure the accuracy and effectiveness of the rubric.

Are you interested in learning more about rubrics and how they can enrich your course? Your Instructional Designer can help you craft effective rubrics that will be the best fit for your unique course.


One of the most common questions I get as an Instructional Designer is, “How do I prevent cheating in my online course?” Instructors are looking for detection strategies and often punitive measures to catch, report, and punish academic cheaters. Their concerns are understandable—searching Google for the phrase “take my test for me” returns pages and pages of results from services with names like “Online Class Hero” and “Noneedtostudy.com” that promise to use “American Experts” to help pass your course with “flying grades.” 1 But by focusing only on what detection measures we can implement and the means and methods by which students are cheating, we are asking the wrong questions. Instead, let’s consider what we can do to understand why students cheat, and how careful course and assessment design might reduce their motivation to do so.

A new study published in Computers & Education identified five themes in the reasons students provided when seeking help from contract cheating services (Amigud & Lancaster, 2019):

  • Academic Aptitude – “Please teach me how to write an essay.”
  • Perseverance – “I can’t look at it anymore.”
  • Personal Issues – “I have such a bad migraine.”
  • Competing Objectives – “I work so I don’t have time.”
  • Self-Discipline – “I procrastinated until today.”

Their results showed that students don’t begin a course with the intention of academic misconduct. Rather, they reach a point, often after initially attempting the work, when the perception of pressures, lack of skills, or lack of resources removes their will to complete the course themselves. Online students may be more likely to have external obligations and involvement in non-academic activities. According to a 2016 study, a significant majority of online students are often juggling other obligations, including raising children and working while earning their degrees (Clinefelter & Aslanian, 2016).

While cheating will never be completely eliminated, several strategies have emerged in recent research that approach the problem through a lens of design rather than one of punishment. Here are ten of my favorite approaches, each of which speaks to the justifications students gave for cheating:

  1. Make sure that students are aware of academic support services (Yu, Glanzer, Johnson, Sriram, & Moore, 2018). Oregon State, like many universities, offers writing help, subject-area tutors and for Ecampus students, a Student Success team that can help identify resources and provide coaching on academic skills. Encourage students, leading up to exams or big assessment projects, to reach out during online office hours or via email if they feel they need assistance.
  2. Have students create study guides as a precursor assignment to an exam—perhaps using online tools to create mindmaps or flashcards. Students who are better prepared for assessments have a reduced incentive to cheat. Study guides can be a nongraded activity, like a game or practice quiz, or provided as a learning resource.
  3. Ensure that students understand the benefits of producing their own work and that the assessment is designed to help them develop and demonstrate subject knowledge (Lancaster & Clarke, 2015). Clarify for students the relevance of a particular assessment and how it relates to the weekly and larger course learning outcomes.
  4. Provide examples of work that meets your expectations along with specific evaluation criteria. Students need to understand how they are being graded and be able to judge the quality of their own work. A student feeling in the dark about what is expected from them may be more likely to turn to outside help.
  5. Provide students with opportunities throughout the course to participate in activities, such as discussions and assignments, that will prepare them for summative assessments (Morris, 2018).
  6. Allow students to use external sources of information while taking tests. Assessments in which students are allowed to leverage the materials they have learned to construct a response do a better job of assessing higher order learning. Memorizing and repeating information is rarely what we hope students to achieve at the end of instruction.
  7. Introduce alternative forms of assessment. Creative instructors can design learning activities that require students to develop a deeper understanding and take on more challenging assignments. Examples of these include recorded presentations, debates, case studies, portfolios, and research projects.
  8. Rather than a large summative exam at the end of a course, focus on more frequent smaller, formative assessments (Lancaster & Clarke, 2015). Provide students with an ongoing opportunity to demonstrate their knowledge without the pressure introduced by a final exam that accounts for a substantial portion of their grade.
  9. Create a course environment that is safe to make and learn from mistakes. Build into a course non-graded activities in which students can practice the skills they will need to demonstrate during an exam.
  10. Build a relationship with students. When instructors are responsive to student questions, provide substantive feedback throughout a course and find other ways to interact with students — they are less likely to cheat. It matters if students believe an instructor cares about them (Bluestein, 2015).

No single strategy is guaranteed to immunize your course against the possibility that a student will use some form of cheating. Almost any type of assignment can be purchased quickly online. The goal of any assessment should be to ensure that students have met the learning outcomes—not to see if we can catch them cheating. Instead, focus on understanding pressures a student might face to succeed in a course, and the obstacles they could encounter in doing so. Work hard to connect with your students during course delivery and humanize the experience of learning online. Thoughtful design strategies, those that prioritize supporting student academic progress, can alleviate the conditions that lead to academic integrity issues.


1 This search was suggested by an article published in the New England Board of Higher Education on cheating in online programs. (Berkey & Halfond, 2015)

References

Amigud, A., & Lancaster, T. (2019). 246 reasons to cheat: An analysis of students’ reasons for seeking to outsource academic work. Computers & Education, 134, 98–107. https://doi.org/10.1016/j.compedu.2019.01.017

Berkey, D., & Halfond, J. (2015). Cheating, student authentication and proctoring in online programs.

Bluestein, S. A. (2015). Connecting Student-Faculty Interaction to Academic Dishonesty. Community College Journal of Research and Practice, 39(2), 179–191. https://doi.org/10.1080/10668926.2013.848176

Clinefelter, D. L., & Aslanian, C. B. (2016). Online College Students 2016: Comprehensive Data on Demands and Preferences. Louisville, KY: The Learning House, Inc.

Lancaster, T., & Clarke, R. (2015). Contract Cheating: The Outsourcing of Assessed Student Work. In T. A. Bretag (Ed.), Handbook of Academic Integrity (pp. 1–14). https://doi.org/10.1007/978-981-287-079-7_17-1

Morris, E. J. (2018). Academic integrity matters: five considerations for addressing contract cheating. International Journal for Educational Integrity, 14(1), 15. https://doi.org/10.1007/s40979-018-0038-5

Yu, H., Glanzer, P. L., Johnson, B. R., Sriram, R., & Moore, B. (2018). Why College Students Cheat: A Conceptual Model of Five Factors. The Review of Higher Education, 41(4), 549–576. https://doi.org/10.1353/rhe.2018.0025

Oregon State University migrated its Learning Management System (LMS) to Canvas in 2014-2015. The decision was based not only on the feature alignment between Canvas and our learning platform needs but also on the outstanding customer service Instructure, the company behind Canvas, has provided to our LMS user community, including students, faculty, instructional designers, and administrators. How Instructure provides customer service offers a model we can follow to continue to exceed student expectations.

According to Michael Feldstein’s July 8, 2018 report, the major players in the US LMS market include Blackboard, Canvas, Moodle, Brightspace, Sakai, Schoology, and others (Feldstein, 2018).

LMS Market share in North America

Figure 1: US Primary LMS Systems, July 6th, 2018 (Feldstein, 2018)

 

Of these major players in the LMS field, Canvas is the most noticeable, with the fastest growth in market share among US and Canadian higher education institutions.

LMS history and Market Share

Figure 2. LMS Market Share for US and Canadian Higher Ed Institutions (Feldstein, 2018)

 

Different people suggest different criteria when comparing LMSs. Udutu.com provides a list of 7 things to think about before purchasing an LMS:

  1. Be clear on your learning and training objectives;
  2. Don’t be fooled by the high costs of an LMS;
  3. Know the limitations of your internal team and users;
  4. Pay for the features you need, not for what you might need;
  5. The latest new technology is not necessarily the best one;
  6. Customer support is everything; and
  7. Trust demos and trials over reviews, ratings and “industry experts”

(Udutu, 2016). Noud (2016) suggested the following ten factors to consider when selecting an LMS:

  1. Unwanted Features;
  2. Mobile Support;
  3. Integrations (APIs, SSO);
  4. Customer Support;
  5. Content Support;
  6. Approach to pricing;
  7. Product roadmap;
  8. Scalability, Reliability and Security;
  9. Implementation Timeframe; and
  10. Hidden costs.

Christopher Pappas (2017) suggested 9 factors to consider when calculating your LMS budget:

  1. Upfront costs;
  2. LMS training;
  3. Monthly Or Annual Licensing Fees;
  4. Compatible eLearning Authoring Tools;
  5. Pay-per-User/Learner Fee;
  6. Upgrades and Add-Ons;
  7. Learning and Development Team Payroll;
  8. Online Training Development Costs; and
  9. Ongoing Maintenance.

Of all of the above lists, I like Udutu’s best because it matches my personal experience with LMS migrations.

I first used WebCT between 2005 and 2007, participated in a WebCT Vista to Blackboard migration in 2008, and an Angel to Blackboard migration in 2013-2014. During my seven years using Blackboard as an instructional designer and faculty support staff member, my biggest complaint was its unexpected server outages during peak times, such as the beginning of the term and finals week. In 2014, I moved to Oregon State University (OSU). The OSU community had begun looking for a new LMS in 2013 and started piloting Canvas in 2014. At the end of the pilot, instructor and student feedback was mostly positive. Not subject to local server outages, the cloud-based system was stable and had remained available to users throughout the pilot. Of course, no LMS is perfect. But after careful comparison and feedback collection, we migrated from Blackboard to Canvas in 2015. In my four years of using Canvas so far, there has not been a single server outage, and Canvas provides all the basic functionality of an LMS.

Instructure wanted to expand Canvas’ market share by building up positive customer experiences. They were eager to please OSU, and they provided us with 24/7 on-call customer service during our first two years of using Canvas, at a relatively reasonable price. The pilot users were all super satisfied with the customer service. Several instructors reported that they contacted the Canvas hotline on Thanksgiving or Christmas, their calls were answered immediately, and their issues were resolved.

Michael Feldstein (2018) summarized that Canvas’ “cloud-based offering, updated user interface, reputation for outstanding customer service and brash, in-your-face branding” have helped its steady rise in LMS market share. As instructors and instructional designers, we can learn a lot from Instructure’s success story and focus on improving the service we provide to our students, such as student success coaching, online resources, and online learning communities. Would you agree? If you have specific suggestions on how to improve the way we serve our students, feel free to let us know (Tianhong.shi@oregonstate.edu; @tianhongshi)!

 

References:

Goldberg, M., Salari, S., & Swoboda, P. (1996). World Wide Web – Course Tool: An environment for building WWW-based courses. Computer Networks and ISDN Systems, 28(7-11), 1219-1231.

Feldstein, Michael. (2018). Canvas surpasses Blackboard Learn in US Market Share. E-Literate, July 8, 2018. Retrieved from https://mfeldstein.com/canvas-surpasses-blackboard-learn-in-us-market-share/ on February 2, 2019.

McKenzie, Lindsay. (2018). Canvas catches, and maybe passes, Blackboard. InsideHigherEd. July 10, 2018. Retrieved from https://www.insidehighered.com/digital-learning/article/2018/07/10/canvas-catches-and-maybe-passes-blackboard-top-learning on February 2, 2019.

Moran, Gwen. (October 2010). “The Rise of the Virtual Classroom.” Entrepreneur Magazine. Irvine, California. Retrieved July 15, 2011.

Noud, Brendan. (February 9, 2016). 10 Things to consider when selecting an LMS. Retrieved from https://www.learnupon.com/blog/top-10-considerations-when-selecting-a-top-lms/ on February 2, 2019.

Pappas, Christopher. (June 13, 2017). Top 9 Factors to consider when calculating Your LMS Budget. Retrieved from https://blog.lambdasolutions.net/top-9-factors-to-consider-when-calculating-your-lms-budget on February 2, 2019.

Udutu. (May 30, 2016). How to choose the best Learning Management System. Retrieved from https://www.udutu.com/blog/lms/ on February 2, 2019.

Wikipedia. (n.d.). WebCT. Retrieved from https://en.wikipedia.org/wiki/WebCT on February 2, 2019.

 

Would you like to save time grading, accurately assess student learning, provide timely feedback, track student progress, demonstrate teaching and learning excellence, foster communication, and much more? If you answered yes, then rubrics are for you! Let's explore why rubrics, used intentionally, are a valuable tool for both instructors and students.

Value for instructors

  • Time management: Have you ever found yourself drowning in a sea of student assignments that need to be graded ASAP (like last week)? Grading with a rubric can quicken the process because every student is graded in the same way, using the same criteria. Rubrics that are detailed, specific, organized, and measurable clearly communicate expectations. As you become familiar with how students commonly respond to an assessment, feedback can be personalized and deployed quickly.
  • Timely and meaningful feedback: Research has shown that several factors enhance student motivation; one of them is feedback that is shared often, detailed, timely, and useful. When students receive relevant, meaningful, and useful feedback quickly, they have an opportunity to self-assess their progress, course correct (if necessary), and level up their performance.
  • Data! Data! Data! Not only can rubrics provide a panoramic view of student progress, but they can also help identify teaching and learning gaps. Instructors can see whether students are improving, struggling, remaining consistent, or missing the mark completely. The information gleaned from rubrics can be used to compare student performance within a course, between course sections, or even across time, and it can also serve as feedback to the instructor on the effectiveness of the assessment.
  • Effectiveness: When a rubric is designed from the outset to measure the course learning outcomes, it can serve as a tool for effective and accurate assessment. Tip! Refrain from scoring only gateway criteria (e.g., organization, mechanics, and grammar); students will interpret meeting those criteria as a demonstration that they have met the learning outcomes, even if they haven't. If learning gaps are consistently identified, consider evaluating the task and rubric to ensure instructions, expectations, and performance dimensions are clear and aligned.
  • Shareable: As academic programs begin to develop courses for various modalities (e.g., on campus, hybrid, online), consistently assessing student learning can be a challenge. The advantage of rubrics is that they can be easily shared and applied between course sections and modalities, which is especially valuable when the same course is taught by multiple instructors and teaching assistants.
  • Fosters communication: Instructors can clearly articulate performance expectations and outcomes to key stakeholders such as teaching assistants, other instructors, academic programs, and student service representatives (e.g., the Ecampus Student Success Team or the Writing Center). Rubrics provide context above and beyond what is outlined in the course syllabus: how students will be assessed, what students should attend to, and how institutional representatives can best support them. Imagine a scenario where a student contacts the Writing Center to review a draft term paper, and the representative asks for the grading criteria or rubric. The criteria furnished by the instructor outline only the requirements for word length, formatting, and citation conventions; none of them communicate the learning outcomes or make any reference to the quality of the work. In this example, the representative might find it challenging to support the student effectively without understanding the instructor's implicit expectations.
  • Justification: Have you ever been tasked with justifying a contested grade? Rubrics can help you through the process! Rubrics that are detailed, specific, measurable, complete, and aligned can be used to explain why a grade was awarded. A rubric can quickly and accurately highlight where a student failed to meet specific performance dimensions and/or the learning outcomes.
  • Evidence of teaching improvement: The values of continuous improvement, lifelong learning, and ongoing professional development are woven into the very fabric of academia. Curating effective assessment tools and methods can provide a means of demonstrating performance and providing evidence to support professional advancement.
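The consistency and data points above can be made concrete with a small sketch. This is a minimal illustration, not tied to any particular LMS or grading tool; the criterion names, point values, and the 70% gap threshold are hypothetical. It shows the two ideas from the list: every submission is scored against the same criteria, and per-criterion averages can flag possible teaching or alignment gaps.

```python
from statistics import mean

# A hypothetical rubric: each criterion maps to its maximum score.
RUBRIC = {
    "Thesis and argument": 10,
    "Use of evidence": 10,
    "Organization": 5,
    "Mechanics and grammar": 5,
}

def score_submission(marks):
    """Score one submission; marks maps criterion -> points awarded.
    Every submission is validated against the same rubric."""
    for criterion, points in marks.items():
        if criterion not in RUBRIC:
            raise ValueError(f"Unknown criterion: {criterion}")
        if not 0 <= points <= RUBRIC[criterion]:
            raise ValueError(f"{criterion}: {points} is out of range")
    return sum(marks.values())

def criterion_gaps(all_marks, threshold=0.7):
    """Flag criteria where the class average falls below a fraction of the
    maximum -- a possible teaching or alignment gap worth reviewing."""
    gaps = []
    for criterion, maximum in RUBRIC.items():
        avg = mean(m[criterion] for m in all_marks)
        if avg / maximum < threshold:
            gaps.append((criterion, round(avg, 1)))
    return gaps

class_marks = [
    {"Thesis and argument": 9, "Use of evidence": 5,
     "Organization": 4, "Mechanics and grammar": 5},
    {"Thesis and argument": 8, "Use of evidence": 6,
     "Organization": 5, "Mechanics and grammar": 4},
]
print(score_submission(class_marks[0]))  # 23
print(criterion_gaps(class_marks))       # [('Use of evidence', 5.5)]
```

Because the rubric lives in one shared structure, the same scoring logic applies unchanged across sections, modalities, or instructors, which is exactly what makes rubric data comparable.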

Value for students

  • Equity: Rubrics create an opportunity for consistent and fair grading for all students: each student is assessed on the same criteria and in the same way. If performance criteria are not clearly communicated from the outset, evaluations may rest on implicit expectations, which students neither know nor understand, creating an unfair assessment structure.
  • Clarity: Ambiguity is decreased by using student-centered language. Student populations are highly diverse, and many students are non-native English speakers, so students may interpret the same words differently (e.g., "critical thinking"). Using clear, simple language can mitigate unintended barriers and decrease confusion.
  • Expectations: Students know exactly what they need to do to demonstrate learning, what instructors are looking for, how to meet the instructor’s expectations, and how to level up their performance. A challenge can be to ensure that all expectations (implicit and explicit) are clearly communicated to students. Tip! Consider explaining expectations in the description of the task as well.
  • Skill development: Rubrics can introduce new concepts and terminology and help students develop authentic skills (e.g., critical thinking) that can be applied outside of their academic life.
  • Promotes metacognition and self-regulatory behavior: Guidance and feedback help students reflect on their thought processes, self-assess, and foster positive learning behaviors.

As an Ecampus course developer, you have a wide array of support services and experts available to you. Are you interested in learning more about rubric design, development, and implementation? Contact your instructional designer today to begin exploring best-fit options for your course. Stay tuned for Rubrics: Markers of Quality (Part 2) – Tips & Best Practices.
