104: Protecting Student Privacy in the Age of AI: 3 Questions Every Educator Must Ask

https://youtu.be/CX_rEX96LPk?feature=shared

While AI has the potential to revolutionize education, we must also be aware of the privacy and bias issues that can arise when we share student information with AI systems. As leaders, we must ask: Where is this data going? Who has access to it? What legal requirements do we need to observe, and what right to privacy do our learners have?

I’ve developed three critical questions you can use to start this conversation with your governing board and your education teams regarding the use of AI. 

Question #1 – Does your leadership have a policy for sharing learner information with AI?

And if you do, is this policy easily available to teachers and instructors? Do they even know it exists?

In most countries, there are laws in place that protect our privacy as individuals, and especially protect younger learners, when it comes to sharing personal information. Many teachers and school administrators do not realize that when we input information into any technology hosted by a third party, whether it be a Google search or ChatGPT, that data is collected and becomes part of that engine’s greater knowledge set. And let’s be clear: these engines are not a public library. This data is owned by a third-party for-profit corporation whose primary reason for existing is to make money. Although these new AI tools may initially appear to be brilliantly “gifted to us,” there is always a cost. Whether it be a future subscription fee or the advancement of knowledge about us for commercial purposes, the goal is to have us either reaching into our wallets or relinquishing extraordinary levels of personal privacy to systems that are more sophisticated than anything we’ve ever seen or experienced in our lifetime.

Question #2 – Do students and parents have the right to opt out of having their data collected and used by AI systems?

Not everyone feels the same way about the use of AI. While it can be of great assistance in helping schools and educators create more personalized learning experiences for each student, it can also collect data on a student’s background, culture, language, learning style, and much, much more, right down to what their favorite breakfast is. And again, while this information can be incredibly useful in creating customized learning experiences, it also raises some huge concerns about who owns this data, who has access to it, and the nefarious ways it can be used.

Question #3 – Do the AI technologies you are using or considering have established processes for mitigating possible bias generated by the system?

We are now seeing issues where AI systems programmed to analyze a student’s language, gender, background, and culture can make incorrect assumptions about that student’s abilities or interests based on existing stereotypes. As a result, many organizations are now calling for the use of “red teams” or third-party audits, where a group of external experts assesses and evaluates any potentially harmful capabilities in an AI. So whether you’re a techie, a non-techie, or just fed up with keeping up with new technologies, remember that you don’t need to understand how all of this works; you just need to ask tech companies how they are mitigating bias and make sure they can produce an answer that is acceptable to you.

So, hopefully, this video has got you thinking about what types of policies you need to put in place to protect learner privacy at your organization.

If you enjoyed this video, please click the Subscribe Now button at the top of my website to join my exclusive community of learning leaders and ensure you don’t miss out on future videos!  

#elearninggold #annettelevesque #education #distanceeducation #elearning #AIEducation 
