Virtuosity 11.11

Where words become worlds…

Archive for the category “Education and Program Evaluation Topics”

PowerPoint Week

The days of the infamous “Wall of Text” PowerPoint slides are over. This week, I’ll be talking a bit about how to make a PowerPoint presentation and offering some advice on how to get your story and your slides up to speed.

In the meantime, I’ll start this topic off with some expert advice from Dr. James Hayton’s PhD advice website. In How to design outstanding power point slides, he provides some very short and sweet pointers to keep in mind when making your slides.



Transcription Tips Tutorial

Transcriptions are an integral part of the research process because they provide a written record of the audio from your interviews and focus groups.  However, without the right tools, they can be VERY time-consuming.  The two biggest tips I can give you are: make the cleanest recording you can, and use a transcription program to help you.  Here’s how:

Find a quiet space, and have multiple recorders going.  Don’t record in a coffee shop or a place that echoes.  Any place with a lot of background noise will give you the transcription from Hell because it’ll be hard to hear what your interviewee is saying.  For a typical interview or focus group session, I usually have at least two recorders going at the same time.  I’ll use my iPhone, my computer, a recorder, and when possible, a microphone.  For focus groups, I’ll put the recorders in different parts of the room.  I prefer to video record when possible as well so that I can see facial expression and body language.  However, to be able to visually record, you need to check with your interviewees and your RSRB to make sure you have permission.  Even with audio, please ask your interviewee before you record.

CLAP before you record your metadata.  If you are recording video, do this in front of the camera.  This will cause a spike in your audio files, and it will make syncing all your files together MUCH easier.

ALWAYS record metadata.  You can start with something like this:
Today is (date), we are doing a focus group interview at (location), it is (time), and with me are: (ask each person to say their name clearly, and give a brief intro that will help you identify their voice and name on the recorder)

Take field notes when you can, although this will depend on the nature of your interview.  If I’m doing focus groups, I will have my computer up, typing notes as people respond to interview questions.  This is because when I type my notes, people actually pay less attention to me and more attention to the others in the room – which is what I want.

However, if I’m interviewing one-on-one, it will depend on who I’m interviewing.  Sometimes, having a computer or notebook out may make the interviewee uncomfortable, and you won’t get spontaneous responses.  It will really depend on the situation.  If you’re in a situation where you can’t take notes during the interview, then make sure you jot things down as soon as you can, while your memories are still fresh.

When the recordings are finished, sync all the files using an audio editor program, such as Garageband or Camtasia.  Remember to line up your “clap spikes,” so that all your audio is synced.  These editor programs are REALLY useful for taking out background noise, too!
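If you would rather script this step than eyeball it in a GUI editor, here is a minimal sketch of the clap-spike idea in Python. The filenames are hypothetical, and it assumes the clap really is the loudest moment in each recording; librosa and soundfile must be installed.

```python
import librosa
import numpy as np
import soundfile as sf

# Hypothetical filenames; librosa needs a backend that can read your formats.
phone, sr = librosa.load("phone_recording.m4a", sr=None)
laptop, _ = librosa.load("laptop_recording.wav", sr=sr)  # resampled to match

# The clap should be the loudest single sample in each file.
clap_phone = int(np.argmax(np.abs(phone)))
clap_laptop = int(np.argmax(np.abs(laptop)))

# Trim the recording that starts earlier so both clap spikes line up.
offset = clap_laptop - clap_phone
if offset > 0:
    laptop = laptop[offset:]
else:
    phone = phone[-offset:]

sf.write("phone_synced.wav", phone, sr)
sf.write("laptop_synced.wav", laptop, sr)
```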

Import your edited file into a transcription program.  I swear by Inqscribe, which I love because everything is in one program: you can speed up or slow down the recording, control start and stop with the tab key (instead of a foot pedal), and tag your file with timestamps (see below).  ALWAYS tag your file with timestamps.

Make time: It will take about an hour to transcribe 15 minutes of audio (from a clean recording), so budget roughly four hours for a one-hour interview.  I transcribe in blocks of time – because you will burn out after a few hours!

Tag your file with timestamps.
Save time on the first pass through: Depending on the purpose of your transcription, sometimes you can just paraphrase and timestamp, while relying on your field notes.  Timestamps will allow you to go back into your file and quickly get to where you need to be.  I timestamp periodically – especially right before important things are said.  Also, if something is inaudible, just type “inaudible” in your transcription to save time and move on.
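Timestamps also make your transcript scriptable later on. Here is a minimal sketch, assuming bracketed timecodes like [00:12:34.05] and a hypothetical file named transcript.txt, that lists every timestamped line so you can jump straight to the important moments:

```python
import re

# Matches a bracketed timecode such as [00:12:34.05] at the start of a line.
TIMESTAMP = re.compile(r"^\[(\d{2}:\d{2}:\d{2}(?:\.\d{2})?)\]\s*(.*)")

with open("transcript.txt", encoding="utf-8") as f:
    for line in f:
        match = TIMESTAMP.match(line)
        if match:
            timecode, text = match.groups()
            print(f"{timecode}  {text[:60]}")  # timecode plus a short preview
```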

Later, if you are doing discourse analysis, you can go back slowly over everything to include the transcription notation (transcriptions may take several passes – depending on how you will analyze these data).

Add dates to your file names
Label your files with the interview date (e.g., 19Sep16 – Gidget Interview).  Also, if you can (some systems allow for this), add a description of the interview – e.g., who was interviewed, where, and what it was about.  Keep these data files in a place that is secure.  Personally, I do not use Google for confidential data.  Instead, I use a Box account through my university, which ensures privacy and security.
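If you want to automate the date prefix, here is a minimal sketch; the folder name and file extension are hypothetical, and it uses each file’s modified time as a stand-in for the interview date:

```python
from datetime import datetime
from pathlib import Path

recordings = Path("interviews")  # hypothetical folder of audio files

for audio in recordings.glob("*.m4a"):
    # Use the file's last-modified time as a stand-in for the interview date.
    interview_date = datetime.fromtimestamp(audio.stat().st_mtime)
    prefix = interview_date.strftime("%d%b%y")    # e.g. "19Sep16"
    if not audio.stem.startswith(prefix):         # skip files already renamed
        audio.rename(audio.with_name(f"{prefix} - {audio.name}"))
```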

Happy Transcribing!


 

DIY Whiteboard Tutorial

I learned this little trick from my son’s 3rd grade teacher.  With just a pack of sheet protectors and paper, you can make an entire class set of “whiteboards” to use in the classroom or for your own personal use when studying (I’ll talk about that part next week).

You only need three things: paper, a sheet protector, and whiteboard markers.


Really.  That’s it.  Put the paper (and this can be blank, lined, graph paper, etc.) into the sheet protector.  Write on top of it with a whiteboard marker, and ta daaa!  Done.

  • To make the sheet protector last longer, I’d suggest using an old sock or rag to wipe as opposed to a piece of tissue.
  • Put a piece of hard cardboard in the back, and you can write on the surface anywhere.
  • Get a metal clipboard, clip the page protector onto it and you have a portable magnet board, too!

Do you have other variations?  Thoughts?  Ideas?  I’d love to hear from you!

Next Tuesday, I’ll show you how to use flash cards in conjunction with your white board to help you actively study.

 

 

Tuesday Tutorial – Evernote for Lectures, Snapshots, and Scans

Hi Everyone!

I have been doing small talks at the Non-Profit Commons in Second Life, where I have had the pleasure of connecting with a lot of passionate people.  This Friday, I’ll be interviewed by blog radio host Marie on her Talk! with Marie show about my work.  In the spirit of this upcoming show, I wanted to highlight an excellent suggestion that she gave me – that people would be interested in tutorials that show what I use in my grad school work!

Evernote was one of the first tools I adopted, because it is not only an electronic journal that keeps everything for me – the paid subscription also makes everything SEARCHABLE!  Even my hand-written notes!

Below, I present a quick tutorial on how to get those  hand-written notes, and the notes instructors put on the board, into your Evernote in a fast and easy way.  Look on, and prepare to be amazed!  (Well, at least I was, when I first tried this trick!)

I made the video with my nifty Camtasia program, so if you view it on YouTube, you can actually click to the marked chapters – which might make viewing more pleasant if you only have a short amount of time.

Enjoy, and let me know if there are other sorts of tutorials that you might be interested in seeing!

Writing Rubrics

As I mentioned in last week’s post on rubrics, a rubric is an assessment tool that helps score and outline performance expectations.  Using a rubric helps by:

  • verbalizing expectations for performance
  • standardizing these expectations
  • providing benchmarks for assessment
  • opening conversations about expectations and desired outcomes

An effective rubric needs to provide an accurate assessment of what it is that you want to measure.  So, in addition to using a template, I wanted to discuss some things to consider when you create or modify a rubric.  To start, I want to build on a previous blog post by Phil Gaiser about rubrics.  From his site, here is an illustration of a typical rubric:

As you can see, a rubric is broken down into four parts:

  • task description – specifies what is being evaluated
  • dimensions – these list the standards, criteria, or components that you will be evaluating
  • scale – these rank from highest score to lowest.  They can be numeric values (point values) or descriptors (excellent, good, average, below average, poor)
  • descriptions of dimensions – these explicitly detail the standards for performance

To create a rubric, consider the following questions:

  1. What are you going to assess? (Task)
  2. What are the characteristics of what you are going to assess? (Dimensions)
  3. What do the characteristics of the highest scoring standards look like? (scale and descriptions)
  4. What do the characteristics of the lowest scoring standards look like? (scale and descriptions)

For example, let’s think about an instructor assessment.  One of the dimensions that I assessed was attendance.  For descriptions, I would use the following:

  • 4 = 100% Attendance, instructor arrived before the start of class each and every time
  • 3 = 100% Attendance, instructor arrived before or at the start of class.
  • 2 = 95% Attendance, instructor arrived at the start of class
  • 1 = Less than 95% attendance and/or instructor was late to class on at least one occasion
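To echo the “make your descriptions measurable” point that comes up later, here is a minimal sketch of this attendance dimension as data plus a scoring rule; the observed values are hypothetical:

```python
# The attendance dimension: scale value -> description.
attendance_rubric = {
    4: "100% attendance; arrived before the start of class every time",
    3: "100% attendance; arrived before or at the start of class",
    2: "95% attendance; arrived at the start of class",
    1: "Less than 95% attendance and/or late at least once",
}

# Hypothetical observations for one instructor.
observed = {"attendance_rate": 1.0, "ever_late": False, "always_early": False}

if observed["attendance_rate"] < 0.95 or observed["ever_late"]:
    score = 1
elif observed["attendance_rate"] < 1.0:
    score = 2
elif observed["always_early"]:
    score = 4
else:
    score = 3

print(score, "-", attendance_rubric[score])
# 3 - 100% attendance; arrived before or at the start of class
```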

Now, you will note that the descriptions reflect a very high standard for instructor attendance.  This was because as soon as an instructor was late to one class, they would receive a written warning – attendance was very important to me, because when an instructor showed up late, it set a bad example for students.  However, if the rubric had not been given and discussed ahead of time, an instructor could quite possibly have thought that 90% attendance was acceptable, even though that would score very poorly on the rubric.

This is another reason why giving rubrics to employees ahead of time can be very useful, because it can prevent misunderstandings.  Here are some more tips to help you:

  • Start with a template to give you an idea of the dimensions that you may want to use.  Draw from several examples to get a feel for what you want to assess.
  • Scale accordingly.  If attendance is not as important as organization, then make sure that organization is given more points than attendance (a weighted-scoring sketch follows this list).
  • Make sure your descriptions are measurable:
    • Quantitative descriptions (e.g., attended 3 out of 4 meetings) are easier to measure than qualitative ones (e.g., positive attitude).
    • Details matter
  • Adjust when needed, ask for others to help you.  Especially when trying a rubric for the first time, I find that it may take a few tweaks to improve how things are being measured.
  • It is important to train people accordingly!  Even though two supervisors may use the same rubric, how that rubric is interpreted may be different.  Hence, it is important to “calibrate” yourself to the rubric – and to make sure you are evaluating consistently.  Discussions on what you expect to see can often clarify any ambiguities.
  • Include those that are being evaluated in the developmental process, so that both you and your students or employees are on the same page when it comes to assessment.
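As promised in the “scale accordingly” tip above, here is a minimal sketch of a weighted rubric score; the dimensions, weights, and scores are all hypothetical:

```python
# Each dimension carries a weight (importance) and a 1-4 score.
rubric = {
    "organization":  {"weight": 3, "score": 4},  # weighted more heavily...
    "communication": {"weight": 2, "score": 2},
    "attendance":    {"weight": 1, "score": 3},  # ...than attendance
}

max_score = 4  # top of the scale
earned = sum(d["weight"] * d["score"] for d in rubric.values())
possible = sum(d["weight"] * max_score for d in rubric.values())

print(f"Weighted score: {earned}/{possible} ({earned / possible:.0%})")
# Weighted score: 19/24 (79%)
```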

For more information, you can go to:

How to rubrics – This document provides a detailed list of questions that can help guide you in the developmental process

Stevens, D. D., & Levi, A. J. (2011). Introduction to rubrics: An assessment tool to save grading time, convey effective feedback, and promote student learning. Stylus Publishing, LLC.

 

Hey, you! Teacher!


Everyone is a teacher.  However, our society has led us to believe that only specific subjects may be taught, and these subjects must be taught in a specific place, by specific people.  Yet, most of the skills we learn about being who we are, being better people, and being good at our jobs are not taught in a classroom.  Yes, formalized education gets our foot in the door for certain things, but our life mentors and teachers, like family, friends, and even strangers, teach us the most about living.

The current research on education challenges this notion that learning is a formalized activity, and that what gets learned should be an amalgamation of decontextualized facts and procedures that for many of us, lack any depth or relevance to our daily living.  In fact, this common notion of learning, and being a “good student” gets challenged in reform-based teaching every day!

For example, in online education, most instructional designers and educators build upon the theory of social constructivism (Mayes & De Freitas, 2004).  This theory says that we construct what we know and understand by actively engaging with others and with our environment (Phillips, 1995).  Now, if you think about how you, personally, have learned things, I bet this makes sense.  How did you learn how to cook?  How to ride a bike?  Or how to pick up a hobby that you love?  More than likely, these things were not learned through a lecture in a classroom.  It may start there, but for most experiences, we learn through doing, talking, trying and failing and trying again.  …and that’s natural!

When we think of learning in this way, then anyone we interact with who has taught us something (and this something could even be a new viewpoint or idea) has become our teacher.  Conversely, when you share your knowledge with others, you are a teacher.

So what are you waiting for?  Go out and teach!

Using rubrics to evaluate students and employees

This post is in response to a discussion last week at the Second Life Tech Soup Friday meeting.  The speaker, Gentle Heron, talked about employee performance reviews.  She gave tips on how to make them more pleasant, such as providing calendar dates and expectations ahead of time, and opening up employee discussions.  I added a comment about how rubrics can help with these assessments…which led to a suggestion that I do a presentation on rubrics this Friday!

A rubric is an assessment tool that helps score and outline performance expectations.

It got me thinking, ‘I bet most people haven’t been taught to use rubrics effectively!’  When people have an opportunity to rate themselves, and when they know that this rating counts, they take more responsibility and develop an awareness of what they do.  As supervisors and teachers, it is only fair that we provide these expectations ahead of time.  There is a very different feel when one is being judged versus when one judges oneself.  When people are given the opportunity to critically self-assess, they become more aware of their job, and more reflective and critical of their own progress.  A rubric is not only an assessment tool – it can be an extremely effective teaching and training tool, too!

When I taught biotechnology to high school students, their grade was based on both my assessment and their own.  We each filled out the employee rubric separately, then met to discuss the scores.  Their final grade was the average of my assessment and theirs.
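Here is a minimal sketch of that averaging; the dimensions and scores are hypothetical:

```python
# Hypothetical per-dimension scores (1-4 scale) from each party.
supervisor      = {"attendance": 4, "lab technique": 3, "teamwork": 3}
self_assessment = {"attendance": 4, "lab technique": 2, "teamwork": 4}

# The final score for each dimension is the average of the two assessments.
final = {dim: (supervisor[dim] + self_assessment[dim]) / 2 for dim in supervisor}

print(final)  # {'attendance': 4.0, 'lab technique': 2.5, 'teamwork': 3.5}
print(f"Overall: {sum(final.values()) / len(final):.2f}")  # Overall: 3.33
```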

Here are some tips:

  • Start with a very clear rubric that outlines all the expectations for employee/student performance.  Sometimes, you may not know what all these expectations are, initially.  If you don’t, this is a GREAT opportunity to work with your employee or student to develop the rubric together.
  • Give this rubric to people ahead of time.  Talk about what the assessment looks like.  For example, ask, “What does a score of 5 look like in terms of attendance?  What about a score of 3?  …and 0?”  Begin these conversations now, so that there are no surprises.  That way, the rubric categories set a standard, and will not be taken as a personal affront.
  • Evaluate mostly formatively and occasionally summatively.  
    • Formative assessments mean that you and the employee/student look at performance periodically, and reflect/revise as you go along.  Think of a chef when they’re cooking an elaborate stew.  When they are constantly tasting the soup, adding spices here and there, adjusting things during the process, they are doing formative assessments.
    • Summative assessments are at the end.  Personally, I do not think these are as useful, but the world (starting with education) has somehow made them the standard.  Summative assessment is like the final taste test at a food contest.  The food is all cooked and finished, and there’s no going back.  This is really tough on an employee/student – and if you think about it, where is the opportunity to learn?
  • Provide spaces between evaluations to reflect, revise, and adjust not only employee/student performance, but the rubric, itself.
    • As a supervisor or teacher, be a “guide on the side,” and mentor – encourage, ask questions, and push, but don’t dictate.  From my experience, people “own” their work and their self assessments when they not only understand what is expected of them, but also that their voices, their input counts.
      • Start by asking the employee/student to justify their scores.  This will give you a good idea of whether their interpretations of the rubrics match your expectations.
    • Assessment and evaluation should be a conversation with clear expectations and understanding.

Here’s a list of places to go for rubric designs and example templates.

iRubric – This site provides starter templates for you to design employee rubrics.  I would start with a generic template, then pull out the job description (or your learning goals), and then customize from there.

Rubistar – This is a great rubric website for teachers.  Similar to iRubric, you can take a pre-made template and customize it to your needs.

Cooper’s Rubric Presentation – This talks in more detail about rubrics, includes several different types of employee performance rubrics, and outlines the steps for developing one.

You can also Google “Employee performance rubrics,” “Student rubrics,” “Music rubrics,” etc.  to get a base template to begin.  Then, customize it to your needs.

Do you have questions about rubrics?  Comments?  Please ask away and I will try my best to answer!

Next up on rubrics:  Creating them.

What’s a Program Evaluator?

In addition to curriculum design and teacher mentoring, I am also a program evaluator.  It did not occur to me that most people don’t know what program evaluators do, until my mother asked me about it.  I floundered over my explanation at the time, since I was caught off guard.  However, after pondering for a bit, here’s a more elegant (and detailed) answer:

A program evaluator analyzes data collected from a program to see whether or not the program is effective in doing what it is supposed to do.  

The evaluator’s data collection and analysis is in the service of answering one very big overarching question:

Is the program working?

We then use more specific, detailed questions to outline the actual evaluation itself. These questions will depend on what the program stakeholders want:

  • What are the components of the program (activities, processes, people), and how do they work together?
  • What components are working well?  What components are not working well?  Why?
  • What is the impact that the program is having on the stakeholders that are involved?
  • Is the program fulfilling its mission statement?  Why, or why not?
    • What evidence can we collect to say that a program is or is not working?
  • How can the program be improved?

Of course, it can be more complicated than this, and there are many different program evaluators out there, each with their own style.

I, personally, take on the role of a critical friend, rather than judge and jury – so in most of the programs that I have evaluated, my analysis and reports are about helping people improve their programs so that they can better serve everyone involved.

It’s a rather gratifying experience, since my job has allowed me to work very closely with people as we plan out what data I will collect, what information I can provide about how their programs work, suggestions for how they can best accomplish their goals, and how they can improve.  My data analysis has also been used in publications!

Although program evaluation can be an extremely rewarding experience, you do need quite a bit of training.  Evaluators should have working background knowledge of the programs that they evaluate (for example, I have an emphasis on STEM education), and they should also have specialized training in evaluation, social research, and data analysis.  I would also add that they should be a people person – interviewing skills are important for the job!

For more info about program evaluation, start here.

Do you have questions about program evaluation?  Similar to education, I can talk on and on about it!  What else would you like to know, hmmm?

Online Education: Beyond the technology – Part 1*

And now a post that is very near and dear to my heart – online education!  I realize that Virtuosity touches upon a variety of topics, but at the core, my passion is teaching teaching.  Why?  Because if you can teach people how to teach, you’ve just empowered the world.

Anyway, when educators discover that I design and consult on online curricula, they start to ask me a lot of questions, or they tell me about their experiences with online education.  One of the most common situations I’ve run into (especially with virtual educators) is: I have my students taking my online course, but they’re not (self-directed, motivated, performing, getting higher grades, participating, interacting, etc.) as much as I thought they would be in this environment.  People will also come to me and say, “Oh, I tried teaching online, and it didn’t work.”

Okay, let’s tackle the big fallacy underlying these two very common situations:

There is an assumption that technology or online (anything) is the silver bullet to teaching.

With the introduction of the Internet, along with lowered costs for personal computers in the 1990s (Reiser, 2001; Harasim, 2000; Kapp & O’Driscoll, 2010), educators began jumping on the bandwagon – touting that the Internet’s capability to connect people to a world of resources would revolutionize education (Reiser, 2001).  Even now, we can still easily find articles on game theory, gamification, MOOCs, and other emergent technologies claiming that a new educational paradigm is upon us – a transformative force that will move our society into a knowledge-based information age.

This is all hype!  The Gartner Hype Cycle is a wonderful model for explaining the typical trend that emergent technologies follow over time in our society.

As you can see from the figure, when technologies enter the scene, they do so amid “inflated expectations” (Gartner, 2003, p. 5) of what a technology may do (whether it is the promise of enlightenment, educating a world of people, or the instantaneous motivation of a classroom full of freshmen) – everyone jumps in, thinking that this will save the world.  Afterward, when all the pixels have settled and we compare digital education to traditional face-to-face methods, we find that on average, most studies show no difference in student performance (Bernard, Abrami, Lou, Borokhovski, Wade, Wozney, Wallet, Fiset, & Huang, 2004**).  In fact, once everyone has jumped onto the bandwagon, we begin to see a wave of literature about how the technology sadly fell short of expectations.

Educators end up in the “trough of disillusionment,” where not only is technology not what it was cracked up to be – it may even be worse than what we had before!  Unfortunately, there have been many occasions where educators and students stop here, embittered that technology did not fulfill its (false) promises of reform.  We have seen this happen several times with online education, and we can follow this curve in the literature.  I’ve read about the hype cycle regarding Second Life, and now MOOCs as well (2012 was supposed to be the “Year of the MOOC”).  I would argue that this is actually the reason why many reform measures fail to work – we don’t hold out for the “slope of enlightenment” phase, where we actually learn when and how to use technology appropriately!

Just how do we arrive at the slope of enlightenment, you might ask?  (Well, maybe you didn’t ask, but I shall ask it for you).  First, we must reevaluate our assumptions about technology in education, and reconsider:

  1. You can lead a student to the Internet, but they still may not learn.
    • We must debunk our assumption of the “Net Generation” – that young people (I cringe at this term) are simply born with a keyboard in their hands and that they will not learn any other way.
    • The other assumption to this, is that the Internet is self-teaching.  This is largely due to Sugata Mitra’s infamous “hole in the wall” study*** – a study that has been critically examined, and unfortunately, myth busted.
  2. “Insanity is doing the same thing over and over again, and expecting different results.” ~Albert Einstein
    • To clarify what I mean by this – I refer to the several times I have been invited to classes in Second Life, only to sit in a virtual chair, with virtual classmates, looking at a virtual board – with PowerPoint slides.  Using technology to do the same things we’ve always done sort of defeats the purpose.  Studies have shown that when instructors adopt technologies in their classrooms in this way – using technology superficially as a “new way” to do “old things” – it doesn’t stick or become incorporated effectively within their programs (Lankshear & Bigum, 1999).
  3. Online teaching requires online pedagogies that are very different from face-to-face teaching.
    • Teaching online using exactly the same pedagogies as face-to-face teaching is like using PowerPoint to teach a yoga class.  It doesn’t work.  Instead, we must develop, understand, and use appropriate methodologies that are aligned with our teaching philosophy and learning goals.
    • Good teaching is good teaching, regardless of where or how you do it.  Good instructors adjust the “how” part, depending on “where” they are and what they are teaching!

So, I mentioned three assumptions.  Although they may seem small, each one is like opening a box of See’s Candies chocolates – full of wonderful surprises.  In future posts, I intend to unpack some of these assumptions for you!  In particular, I will address point #3, because this last one is, I would claim, the Achilles’ heel of the online teaching world.  When online teaching is done incorrectly, the results can turn both teachers and students away from it altogether.  One of my hopes as a curriculum consultant is that I can prevent that from happening!

Stay tuned, fellow educators, grad students, and intellectuals – I have a lot more to share on this subject.  However, if you have a burning question for me about it, or a specific topic you’d like me to blog about, please leave it in the comments, or contact me.  I’d be more than happy to help you!

 

Notes:

* I am a huge believer in accessibility for all audiences.  Therefore, I have tried my best to link you to credible, openly accessible documents.  This has resulted in a simplification of my references – openly accessible links will not be listed in the reference section, since you can click on the embedded links in the blog.  However, if you are eager to learn more, or would like further citations, please let me know.  One advantage to being in the middle of my comprehensive exams is that my brain has been thoroughly marinating in this topic for several months!

**There are many other studies I could cite that show this.  However, Bernard et al.’s meta-analysis is one of the most comprehensive reviews of the literature regarding online versus face-to-face instruction.  Let me know in the comments, however, if there are others you’d like to see!

***Check out Mitra’s talk, here: 

References:

Bernard, R.M., Abrami, P.C., Lou, Y., Borokhovski, E., Wade, A., Wozney, L., Wallet, P.A., Fiset, M., & Huang, B. (2004). How does distance education compare to classroom instruction? Review of Educational Research.

Harasim, L. (2000). Shift happens: Online education as a new paradigm in learning. The Internet and Higher Education, 3(1), 41-61.

Kapp, K.M., & O’Driscoll, T. (2010). Learning in 3D. San Francisco, CA: John Wiley and Sons.

Lankshear, C., & Bigum, C. (1999). Literacies and new technologies in school settings. Pedagogy, Culture & Society, 7(3), 445-465.

Reiser, R. A. (2001). A history of instructional design and technology: Part I: A history of instructional media. Educational Technology Research and Development, 49(1), 53-64.

Catching Butterflies

She closed her eyes

and opened her mind;

released her thoughts

and let them find

the fire in the butterflies.

It seems at the start of every large writing project, I find myself at the edge of a cliff with a large net.  The sky is full of colorful butterflies – their wings dazzle and catch in the light while they fly erratically – teasing, nearing, then flitting unpredictably away, while luring me from my safe space, daring me to step off that safe cliff rock to fall into the dark abyss.  I am mesmerized by their brilliance, as wings collide and dance in front of me.  I only need to catch a few – but they must be specific kinds!  So I watch, and try to shake myself out of that overwhelming stupor – the confusion of having so many ideas and thoughts spiral around me.  I spend days there, focusing, concentrating on that single butterfly that I must catch.  Sometimes, I find them in my net, and am fooled – a stray petal or leaf, but not the butterfly I want.  At other times, I catch so many, and I must only pick a few.  Their brilliant colors confuse me, and before I can get out my collecting jar, they have all flown away.

Find me today, on the edge of the cliff.  I’m catching butterflies, again.  I hold my net and my jar, eyes locked onto the swirls of color… I just need a few.  Only a few.  Just a few to light my way.

Then, I will be ready to jump!

 
