03-08-2024, 10:39 AM
(03-03-2024, 09:28 PM)Hotdogman1 Wrote: It is my understanding that UMPI is neutral on ChatGPT. From an email sent on August 23, 2023:
Dear Students,
As you know, ChatGPT and related artificial intelligence (AI) have prompted much discussion at systems and universities nationwide. As we learn more about AI's real or potential implications for student learning, three truths appear to be emerging.
First, engagement with AI, including ChatGPT, represents an opportunity to acquire new skills and new knowledge, as well as a chance to identify and share resources.
Second, this engagement must be accompanied by an affirmation that our standards of academic integrity have not changed. In some contexts, AI may be a tool to support you as you learn, but it cannot replace that learning or the other work you do to grow as a thinker and as a citizen. And it must not replace your own written or oral presentations and products.
Third, it is increasingly clear that the immediate development of a policy governing ChatGPT and related AI would be premature. We need to hear what faculty find encouraging in this arena, what they find concerning, and what they would like to know more about. This is also the case for our professional advisors, student success staff, academic leaders, and, of course, you.
In the new academic year, I encourage you to discuss ChatGPT and related AI with your faculty advisor, professional advisor, student success staff, and others. The presidents and I will be listening closely to what they think about this subject, and we will be listening closely to what you think as well.
Members of my staff will be talking with student representatives to the Board of Trustees about AI this fall, and they will also be seeking opportunities to speak with undergraduate and graduate student government groups. Feedback from those groups will be shared with the presidents and me throughout the year. Be assured that ChatGPT in particular will be an ongoing focus for us.
I wish you all a positive and productive start to the new academic year.
Regards,
Dannel P. Malloy, Chancellor
University of Maine System
Harvard has a policy on AI similar to this. It basically says: "Use it for research, but don't go as far as cheating. You know what cheating is, and if you don't, refer to the handbook (which is online)."
Their policy also goes beyond the cheating stuff that people have been doing for centuries anyway; it advises researchers to be cautious about entering private or personal research information into any generative AI tool, since doing so may compromise the data. Interesting stuff, in my opinion.
I enjoy when tools like generative AI are released to the public because it forces educators to test competency and critical thinking, not just rote memorization.
When I was in school a gazillion years ago, all my college-level exams had a sprinkle of multiple choice, lots of fill-in-the-blank, and at least three essay questions; this goes for sciences such as A&P, Gen Bio, O-chem, etc. The exams were designed to force you to think critically and demonstrate competency (and yes, you were graded on your GRAMMAR and SPELLING in science classes). Now, kids are taking anatomy and physiology classes where everything is multiple choice, which is mind-boggling to me. This goes for many of the providers like Sophia, SDC, and SL. I'll cut SL a little slack; out of all the ACE providers, their courses most resemble a legitimate college course.
I look forward to seeing how educators will adapt.