Can you have TOO many degrees?
#21
(12-11-2024, 03:55 PM)Hotdogman1 Wrote: ...I don't think anyone here had an issue with ChatGPT. Mr. GPT, please explain your habit of occasionally hallucinating.

"ChatGPT hallucination" refers to instances where ChatGPT provides information that is false, misleading, or fabricated while presenting it as factual. This phenomenon is common in AI systems like ChatGPT because they generate responses based on statistical patterns in their training data rather than accessing a knowledge database in real-time or understanding the content they provide. Here’s a closer look:

Why Hallucinations Occur
Pattern Matching Without Context: ChatGPT predicts the most likely sequence of words based on its training, not on actual "knowledge." If a query lacks sufficient grounding or relates to obscure topics, the model might fabricate plausible-sounding information to fill the gap (see the toy sketch after this list).

Ambiguity in User Input: If the question is vague or open to interpretation, ChatGPT might generate a response based on partial understanding or assumptions, leading to inaccuracies.

Limitations of Training Data: The model is trained on a vast dataset but does not inherently know which sources are credible. It might draw from outdated, biased, or fictional information, causing factual distortions.

Lack of Fact-Checking Mechanisms: ChatGPT does not verify its outputs in real-time. It does not cross-reference external databases during the conversation, leading to a risk of confidently presenting falsehoods.
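
To make the "Pattern Matching Without Context" point concrete, here is a minimal toy sketch in Python (purely illustrative, not ChatGPT's actual architecture; the word-frequency table is invented for the example). It shows how a system that only continues text with the statistically likeliest next word can produce a fluent sentence that is flatly false:

```python
# Toy next-word predictor. The "learned" frequencies below are invented for
# illustration; a real model has billions of parameters, but the key point is
# the same: nothing in this loop ever checks whether the output is true.
NEXT_WORD = {
    ("first", "president", "of"):  {"canada": 0.55, "the": 0.45},
    ("president", "of", "canada"): {"was": 1.0},
    ("of", "canada", "was"):       {"george": 1.0},
    ("canada", "was", "george"):   {"washington": 1.0},
}

def generate(prompt, max_new_words=10):
    """Greedily append the most likely next word until no continuation is known."""
    words = prompt.lower().split()
    for _ in range(max_new_words):
        choices = NEXT_WORD.get(tuple(words[-3:]))
        if not choices:
            break
        words.append(max(choices, key=choices.get))  # most probable continuation
    return " ".join(words)

print(generate("The first president of"))
# -> "the first president of canada was george washington"
```

The output reads naturally because each step is locally plausible; factual accuracy never enters the loop, which is exactly the failure mode described above.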

Examples of Hallucination
Fictitious References: Citing books, studies, or articles that don’t exist.
Misinformation: Giving incorrect dates, names, or events. For example, stating "George Washington was the first President of Canada."
Fabricated Processes: Describing non-existent techniques or procedures in science, medicine, or technology.
Misinterpreted Contexts: Combining unrelated facts into a response that sounds logical but is entirely false.

Mitigating Hallucination
User Verification: Encourage users to cross-check information with trusted sources.

Enhanced Training: Training models on more curated datasets to improve reliability.

Integrated Fact-Check Systems: Linking responses to verified knowledge bases or live information sources (see the sketch after this explanation).

Prompt Refinement: Users providing specific, well-structured queries to reduce ambiguity in responses.

Hallucinations are a significant challenge for conversational AI and emphasize the importance of critical thinking when interpreting AI-generated content. If you'd like, I can assist in explaining strategies for detecting hallucinated content or improving AI reliability.
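
As a small illustration of the "User Verification" and "Integrated Fact-Check Systems" points in the list above, here is a minimal, hypothetical Python sketch that flags citations in a generated answer that cannot be found in a trusted reference list. The KNOWN_SOURCES set and the simple "Author (Year)" citation pattern are assumptions made up for this example; a real system would query an actual bibliographic database:

```python
import re

# Stand-in for a verified bibliographic database (hypothetical entries).
KNOWN_SOURCES = {"smith (2021)", "jones (2019)"}

def extract_citations(text):
    """Find simple 'Author (Year)' citations in a piece of generated text."""
    return re.findall(r"[A-Z][a-z]+ \(\d{4}\)", text)

def flag_unverified(text):
    """Return citations that cannot be matched to the trusted list."""
    return [c for c in extract_citations(text) if c.lower() not in KNOWN_SOURCES]

answer = "Smith (2021) and Brown (2020) both report a 40% improvement."
print(flag_unverified(answer))  # -> ['Brown (2020)']  -- possibly a fabricated source
```

Anything flagged this way still needs a human to verify it, but even a simple check like this catches the "fictitious references" failure mode before it ends up in a paper.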

The issue is more severe when a person uses ChatGPT or other AI-generation software to write a whole paper or assignment that is not original writing. In my doctoral program, you would be kicked out or fail the course for plagiarism and student-ethics violations under the student honor code. A few of my professors gave warnings and have failed people over this. Why should someone be rewarded for using AI systems to assist with their papers? Using Microsoft Word to correct grammar is one thing, but using AI software to enhance your writing ability or write entire sections of your assignment is a negative. It's unfair to those who work hard and do original work and research. Seeing a published article or book that is entirely AI-generated is even more frustrating.
I guess I am old school; I got many of my degrees before this technology came out.
Degrees In Progress:
EVMS Doctor of Health Science
Completed Degrees:
Doctor of Healthcare Administration Dec 2021 
Masters of Business Administration July 2022
Masters of Public Administration '19
Masters of Arts in Urban Affairs '17
Masters of Arts in Criminal Justice '16
Bachelors of Science in Police Studies '14
Advanced Graduate Certificate in Criminal Investigations '15
#22
(12-11-2024, 04:18 PM)newdegree Wrote: ...It's unfair to those who work hard and do original work and research. Seeing a published article or book that is entirely AI-generated is even more frustrating. I guess I am old school; I got many of my degrees before this technology came out.

Let me respectfully agree and disagree with a few of your points. Yes, I absolutely agree with your take on having ChatGPT write an entire paper and passing it off as your own original work. And yes, I agree about the limits of AI when it gives false or misleading information.

However, while you do have the "old school" right to remain fixed on the way you used to do things, or do them now, that does not mean this new technology is inherently bad. Let me give you an example:

In both my doctoral program and my current master's program with UofA [referencing two entirely different schools], my Arizona professors and, at VUL, our Dean of the Business School both grade our papers using the Grammarly score as one tool.

While you assume that everyone is "perfect" at writing research papers, our Dean emphasizes the use of Grammarly to help enhance our papers and make sure we are writing at a scholarly level, specifically the postgraduate level. Many students have not been to school in 15+ years and have lost the skill of writing without making mistakes in grammar, sentence structure, nouns, verbs, context, and punctuation. Even as I write this response, my writing is not perfect.

And certainly the whole point of Grammarly and other AI tools is to assist the writer in making sure the completed product is up to the high standards of publications, graded term papers, theses, and other academic work.

I share this to also make an important point, one that many are now learning as they use Grammarly to "correct" their papers: the AI score goes up, because Grammarly uses its own algorithm and recommendations to "rewrite" the paper. Grammarly even has a section at the bottom where you can ask it to "Increase the Impact of your Text," and it rewrites an entire paragraph.

So now what? Are you suggesting that VUL, the University of Arizona, or any college, university, or professor that chooses to use Grammarly as part of a grading score for their students is wrong or engaged in plagiarism or academic dishonesty? What happens if I use Grammarly to rewrite this entire post and my AI score is 65%? Or if I paste this into ChatGPT to have it assist me in making better sentences, punctuation, and more succinct thoughts, am I not still the original author?

While I respectfully understand your bias, I would not disregard the new technology. Yes, I agree there can be abuses. But I also believe it can be extremely beneficial. 

ChatGPT Rewritten:

I respectfully agree and disagree with several of your points. I completely agree with your stance against having ChatGPT write an entire paper and passing it off as original work. Similarly, I share your concerns about the limits of AI, particularly when it provides false or misleading information.

However, while you have every right to adhere to traditional methods or approaches you’re accustomed to, it doesn’t necessarily mean that this new technology is inherently bad.

Allow me to provide an example:

In both my doctoral program and my current master’s program at the University of Arizona (UofA)—two distinct institutions—my professors at UofA, as well as the Dean of the Business School at VUL, incorporate tools like Grammarly as part of the grading process.

Our Dean emphasizes Grammarly as a tool to enhance academic writing, ensuring that papers meet scholarly and postgraduate standards. Many students returning to school after 15 or more years often struggle with grammar, sentence structure, context, and punctuation. Grammarly provides a valuable aid to bridge these gaps and elevate the quality of their writing. Even as I write this, my own writing is far from perfect.

The purpose of tools like Grammarly and similar AI technologies is to assist writers in producing polished, high-quality work—whether for publications, graded term papers, theses, or academic projects. Here’s an interesting insight: as Grammarly “corrects” a paper, it increases the AI-generated score through its algorithms and suggested rewrites. It even offers options to “Increase the Impact of Your Text,” effectively rewriting entire paragraphs for better clarity and engagement.

So, what does this mean? Are we to assume that institutions like VUL, the University of Arizona, or any professor who uses Grammarly in their grading process are promoting plagiarism or academic dishonesty? If I use Grammarly to refine this response or ChatGPT to improve my sentences, punctuation, or clarity, am I no longer the original author of my ideas?

While I respect your perspective, I believe it’s important to recognize the potential of these technologies. Yes, there is room for misuse, but the benefits they offer are significant. They can help individuals meet rigorous academic standards, adapt to modern expectations, and, ultimately, improve the quality of their work.

This version maintains your tone while making the content more concise and polished.
Completed

Doctor of Healthcare Administration  |  Virginia University of Lynchburg
MBA  |  Universidad Isabel I / ENEB
Master in Human Resources Management  |  Universidad Isabel I / ENEB
Master in Project Management  |  Universidad Isabel I / ENEB
Master in Business & Corporate Communication  |  Universidad Isabel I / ENEB
Bachelor of Business Administration (Equivalent)  |  NACES, ECE




In Progress
PhD in Public Health | Texila American University | Class of 2027
Master of Arts in Human Rights Practice | University of Arizona, Class of 2025 


#23
Using Grammarly to fix grammar is fine, but it can also rewrite your sentences, making them non-original writing. Having any AI software rewrite any part of your paper is plagiarism because you are claiming it as your own writing. If your writing is truly at a master's or doctoral caliber, you should not need any AI tools. Submitting articles, books, or assignments that are entirely AI-written is a significant negative. The score would not come out as high if AI software had only been used to correct minor errors: the detector's algorithms classify writing as original human writing, human-written and AI-refined, AI-generated and AI-refined, or fully AI-generated. The AI detector does not lie; if it were simply Grammarly, it wouldn't flag anywhere near 80-100% of a student's paper as AI-generated writing.
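
For what it's worth, here is a toy Python sketch of the kind of banding described above, where a single "likelihood AI-generated" score is mapped to those categories. The thresholds are invented purely for illustration; real detectors use trained classifiers and report their results in their own formats:

```python
def classify(ai_score):
    """Map a 0-100 'AI-generated likelihood' score to a rough category (toy thresholds)."""
    if ai_score < 20:
        return "original, human-written"
    if ai_score < 50:
        return "human-written, AI-refined"
    if ai_score < 80:
        return "AI-generated and AI-refined"
    return "fully AI-generated"

for score in (5, 35, 65, 92):
    print(score, "->", classify(score))
```

On that kind of banding, a paper that only ran through Grammarly for minor corrections would sit near the low end, which is the point being made here.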

Using technology to write or modify your papers does not make you a better writer. If you feel that you require these tools as a student, maybe you should take a few writing or English courses to strengthen your skill set.
Degrees In Progress:
EVMS Doctor of Health Science
Completed Degrees:
Doctor of Healthcare Administration Dec 2021 
Masters of Business Administration July 2022
Masters of Public Administration '19
Masters of Arts in Urban Affairs '17
Masters of Arts in Criminal Justice '16
Bachelors of Science in Police Studies '14
Advanced Graduate Certificate in Criminal Investigations '15
#24
(12-11-2024, 05:51 PM)newdegree Wrote: ...Having any AI software rewrite any part of your paper is plagiarism because you are claiming it as your own writing. ... Using technology to write or modify your papers does not make you a better writer.
We'll have to agree to disagree. It is common knowledge that we write the way we speak, and the reality is that not everyone can take an English course to increase their skill set. You may not like it, but universities are using AI [Grammarly] and other tools. Even lawyers now have legal-writing AI tools to assist attorneys in writing and filling out myriad legal documents flawlessly. And the medical field also has AI to assist in diagnosing symptoms and writing up medical reports. None of that falls into the category of "non-original writing." And there is an entire business industry of book writing on Amazon, where authors are using AI to write books and selling them online.

I'll respectfully leave it there.
Completed

Doctor of Healthcare Administration  |  Virginia University of Lynchburg
MBA  |  Universidad Isabel I / ENEB
Master in Human Resources Management  |  Universidad Isabel I / ENEB
Master in Project Management  |  Universidad Isabel I / ENEB
Master in Business & Corporate Communication  |  Universidad Isabel I / ENEB
Bachelor of Business Administration (Equivalent)  |  NACES, ECE




In Progress
PhD in Public Health | Texila American University | Class of 2027
Master of Arts in Human Rights Practice | University of Arizona, Class of 2025 


#25
(12-11-2024, 06:03 PM)michaeladsmith2 Wrote: Even lawyers now have legal-writing AI tools to assist attorneys in writing and filling out myriad legal documents flawlessly.
"A U.S. judge on Thursday imposed sanctions on two New York lawyers who submitted a legal brief that included six fictitious case citations generated by an artificial intelligence chatbot, ChatGPT."
https://www.reuters.com/legal/new-york-l...023-06-22/

"Chong Ke ordered to pay costs for opposing counsel to discover precedent was AI 'hallucination'"
https://www.cbc.ca/news/canada/british-c...-1.7126393

"The FTC is taking action against DoNotPay, a company that claimed to offer an AI service that was “the world’s first robot lawyer,” but the product failed to live up to its lofty claims that the service could substitute for the expertise of a human lawyer."
https://www.ftc.gov/legal-library/browse...s/donotpay

(12-11-2024, 06:03 PM)michaeladsmith2 Wrote: And the medical field also has AI to assist in diagnosing symptoms and writing up medical reports.
"But Whisper (OpenAi's artificial intelligence-powered transcription tool) has a major flaw: It is prone to making up chunks of text or even entire sentences, according to interviews with more than a dozen software engineers, developers and academic researchers. Those experts said some of the invented text — known in the industry as hallucinations — can include racial commentary, violent rhetoric and even imagined medical treatments."
https://apnews.com/article/ai-artificial...6c4c9bbb14
#26
(12-11-2024, 06:03 PM)michaeladsmith2 Wrote: ...It is common knowledge that we write the way we speak, and the reality is that not everyone can take an English course to increase their skill set. ... And there is an entire business industry of book writing on Amazon, where authors are using AI to write books and selling them online.

I'll respectfully leave it there.

So, I guess you proved my point: if a person is not ready to do graduate-level work, they have no business in a graduate or doctoral program. ChatGPT and AI-generated work have no business in the academic field. English not being your first language is not the university's problem; you are on the same grading scale and criteria as everyone else, and the requirements clearly state that you must be able to read, write, and understand English proficiently. People who use AI generators are doing a disservice to themselves. Also, if you use AI generators to write papers, assignments, submitted articles, or books, you are plagiarizing, because the AI generator is doing the work for you. You cannot accept the glory of publication if it is not your work, especially with a high AI-generation score. If my lawyer used AI generation to create motions, documents, discovery, and so on, that would be a red flag; I would get a new lawyer.
On the other hand, a doctor using a licensed system that a medical writing company created, and that a hospital system buys and uses for the sake of providing a scientific diagnosis, is a different story. Those systems are endorsed by the hospital systems that pay for the writing, and a person wrote all of those diagnoses. You can't compare this to academic writing. You are doing a disservice to the university, the educational field, and the research field when a computer system does the work for you. Original writing is writing you draft yourself and put together based on your own research and sources.
Degrees In Progress:
EVMS Doctor of Health Science
Completed Degrees:
Doctor of Healthcare Administration Dec 2021 
Masters of Business Administration July 2022
Masters of Public Administration '19
Masters of Arts in Urban Affairs '17
Masters of Arts in Criminal Justice '16
Bachelors of Science in Police Studies '14
Advanced Graduate Certificate in Criminal Investigations '15

