Remember test taking? Or maybe you’re still taking them. Did you ever cheat on one or have friends who did? There were so many “tricks”. Of course, cheating never pays off in the long run, but that doesn’t stop students from trying.
Now that so much of education has abruptly moved online, the responsibility of catching cheaters has shifted from teachers to educational monitoring programs. In this Tech Tuesday, I’ll talk about these programs, whether they are in the best interest of students, and why and how CSUF students are protesting their use.
Educational Monitoring Programs
Examples of programs designed to help catch cheaters are Proctorio and ExamSoft. Both programs use the camera in students’ laptops and A.I. to catch would-be cheaters. They do this by using “software that can lock down students’ computers, record their faces, and scan their rooms, all with the intention to thwart cheating” (NYTimes). The programs capture audio, video, and what tabs are open in a browser.
My close family member N.W., who works for the California education system, recently brought this sort of monitoring to my attention, and we had a conversation about how these systems may or may not have the best interest of students in mind.
From a 30,000-foot view, an A.I. designed to detect cheating would need lots of footage of what “cheating” vs. “not-cheating” looks like. Then, once an accurate A.I. model has been built, a company could promise some level of accuracy; I’m assuming there would have to be a published false-positive rate. In other words, the rate at which the A.I. says a student cheated (positive) when the student actually didn’t (false), a number you’d hope is very low.
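To make that number concrete, here’s a minimal sketch of how a false-positive rate is computed. The labels below are entirely made up for illustration: 1 means “cheated” and 0 means “did not cheat”.

```python
# Invented ground-truth labels and A.I. flags for eight exam sessions.
actual    = [0, 0, 1, 0, 1, 0, 0, 0]  # what really happened
predicted = [0, 1, 1, 0, 1, 0, 1, 0]  # what the A.I. flagged

# A false positive is when the A.I. says "cheated" (1)
# but the student actually didn't (0).
false_positives = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
actual_negatives = sum(1 for a in actual if a == 0)

false_positive_rate = false_positives / actual_negatives
print(f"False-positive rate: {false_positive_rate:.1%}")
```

Here, 2 of the 6 honest students get flagged, so the false-positive rate is about 33%, which is exactly the kind of number a vendor would need to publish and keep low.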
What could possibly go wrong with automating the process of detecting cheaters?
What Could go Wrong?
Simple: the software may not have been trained using a diverse set of students. What if the algorithm to detect cheating was trained only on young, Caucasian students with no visual or physical impairments? What happens when older or non-Caucasian students use the system? Or students who exhibit uncommon body movements due to stress or an ongoing condition?
If these students weren’t included in the training set that built the model, then there’s no telling what the false-positive rate would be for those outside the training set. And there are indications that the false-positive rate is much higher for some groups of students, for example those who frequently look around due to factors outside their conscious control. In other words, the software may not generalize well.
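This is why an aggregate accuracy figure can be misleading: a single overall false-positive rate can hide a much worse rate for an underrepresented group. A hypothetical illustration, with all data invented for the sketch:

```python
# Each record: (group, actually_cheated, flagged_by_ai).
# Nobody in this sample actually cheated.
records = [
    ("represented_in_training", 0, 0), ("represented_in_training", 0, 0),
    ("represented_in_training", 0, 0), ("represented_in_training", 0, 1),
    ("outside_training_set", 0, 1), ("outside_training_set", 0, 1),
    ("outside_training_set", 0, 0), ("outside_training_set", 0, 1),
]

def fpr(rows):
    # Honest students (actual label 0) who were nevertheless flagged.
    negatives = [r for r in rows if r[1] == 0]
    flagged = [r for r in negatives if r[2] == 1]
    return len(flagged) / len(negatives)

print(f"overall: {fpr(records):.0%}")
for group in ("represented_in_training", "outside_training_set"):
    rows = [r for r in records if r[0] == group]
    print(f"{group}: {fpr(rows):.0%}")
```

In this made-up sample the overall rate is 50%, but it splits into 25% for students like those in the training data and 75% for everyone else, which is the disparity hidden inside the average.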
The consequences of a false accusation are astronomical! Imagine being falsely accused of cheating. It’s a truly awful experience. I remember serving on a board at my university where we handled cases of cheating. In one particular case, I remember how miserable the process was, including questioning, evidence gathering, meetings with parents and professors, and the crying and pleading of the accused student.
Now imagine an entire population of marginalized students having to go through a similar ordeal because an automated system that was never trained on students like them placed them in the “guilty” bucket.
What are your thoughts on this?
As we trust computers more and more to make important decisions for us (like how to drive from A to B or even to drive the car for us), we assume great risk, and it’s not all that obvious from the outset how damaging it can be.
Don’t get me wrong, I believe that one day autonomous vehicles will help us save millions of lives, but we must be careful that the models we build generalize well and serve all with equal importance. This also applies to educational monitoring programs.
Many thanks to N.W. for the tip and for caring as much as you do about your students. Hats off to you!
Europe is missing out on the A.I. revolution: can it catch up?
For decades, the European economy has been characterized by and celebrated for its industry, from manufacturing to construction and energy generation. Even today, industry accounts for 80% of Europe’s exports and private sector innovations. But when looking at this year’s Future 50, it turns out EU countries are missing out on the A.I. revolution. Is it too late to catch up? Read more here.
Tell me who your neighbors are and I’ll tell you who you are
Put simply, k-NN is a machine learning classification algorithm that helps you predict the category something belongs to, based on its similarity to other data points around it. This article explains this useful algorithm in simple terms with good visualizations.
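To make the idea concrete, here is a toy k-NN classifier in plain Python: a new point gets the majority label of its k closest labeled neighbors. The two little clusters of data are invented for illustration.

```python
import math
from collections import Counter

# Invented training data: two clusters of 2-D points with labels.
training = [
    ((1.0, 1.0), "cat"), ((1.2, 0.8), "cat"), ((0.8, 1.1), "cat"),
    ((4.0, 4.2), "dog"), ((4.3, 3.9), "dog"), ((3.8, 4.1), "dog"),
]

def knn_predict(point, data, k=3):
    # Sort labeled examples by Euclidean distance to the query point...
    by_distance = sorted(data, key=lambda item: math.dist(point, item[0]))
    # ...then let the k nearest neighbors vote on the label.
    labels = [label for _, label in by_distance[:k]]
    return Counter(labels).most_common(1)[0][0]

print(knn_predict((1.1, 0.9), training))  # lands in the "cat" cluster
```

That’s the whole algorithm: no model is “trained” up front; the prediction comes straight from the similarity of the query to the stored examples, which is exactly the “tell me who your neighbors are” intuition.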
What’s the best approach to data analytics?
In practicing data analytics for more than 30 years and leading, advising, interviewing, and teaching executives in many industries on data analytics, Tom O’Toole observed that their approaches generally fall into one of five scenarios: two that typically fail, two that sometimes work partially, and one that has emerged as best. Curious which one? Read more here.