
Braincells & ChatGPT: Are We Even "Smart" Anymore? (On Behalf of The Teachers)




Introduction: Two Scorned Scholars


Just yesterday, I was sitting in a coffee shop trying to punch through studying for yet another security certification (and yes, it was dull) when I shamelessly started eavesdropping on a conversation between two high-school teachers. Usually I would never be so ill-mannered, but lo and behold, the main topic of discussion was none other than AI and ChatGPT. While both women seemed genuinely polite and understanding of the circumstances of today (you know, with students being on their phones 24/7), I could tell there was some serious bitterness buried deep there. I sat and listened to these two teachers rant about what AI and ChatGPT are doing to their kids for 40 MINUTES before jumping in, like a total weirdo, and asking if they could list out their concerns for this article. Thankfully, they were more than happy to provide their perspectives:


"My students regularly submit work using AI and the parents get upset with me because I refuse to grade their child's work based on ChatGPT's standards. But I'm not an idiot. I'm not using ChatGPT to grade work that ChatGPT originally did!"
"I try to use essays written by ChatGPT as an example for my students' writing, but I never intended for them to take that as an excuse to plagiarize. Now I plug every single essay into ChatGPT only to find out that there was nothing authentic about what my students wrote. Integrity is truly dead."
"I can't even finish giving an assignment before someone raises their hand to ask if they can use AI to do it. Don't get me wrong: I don't want to ban them from using AI because in some ways, it can improve their work. However, I'm afraid that kids are relying on ChatGPT so heavily that they're losing braincells."

Think of that last quote..."kids are relying on ChatGPT so heavily that they're losing braincells". A strong statement, no doubt, but these teachers' concerns aren't completely wrong.


Mamma Mia... Here We Go Again



Now you're probably shaking your head and lamenting:

"Dear Silicon, you awful, nay-saying, bitter, social media and all-things trendy killjoy: PLEASE don't ruin ChatGPT for us. If you do, you're just as bad as that University Professor who made me come to class on the Monday morning after the Super Bowl, and that SUCKED."

Well, don't worry, Gen-Z. I love you and respect you, and because of that, I'm not here to lecture or launch into an exhausting tirade on the "evils of ChatGPT", like my previous work on TikTok or dating apps. Actually, the last thing I want to do is scorch ChatGPT with the burning fire of a thousand suns! In fact, I can acknowledge the benefits that generative AI apps like ChatGPT have contributed to society, because there are a lot of them. Also, keep in mind that Silicon was a college student not too long ago, and I definitely 100% dabbled in some AI here and there.


I remember one time having to churn out an essay on a legendary, transformative band from the 1970s for an undergrad music elective. Naturally, I went with ABBA, as one does (Mamma Mia has helped shape the fabric of modern girlhood). So when I typed "write essay on ABBA, the band" into ChatGPT back in 2022 (the GPT-3.5 version), this was the result:



Astounding. Even better than what I managed to cook up for my horrible college essay, and oh-so much more grammatically pleasing. Speaking of transformative influence (see: Meryl Streep and Amanda Seyfried), ChatGPT has revolutionized ANYTHING and EVERYTHING, from writing college essays to planning your wedding, and it's getting better by the second. Not only does ChatGPT save you hours of researching, outlining, polishing, re-writing, and peer-reviewing to get a basic essay done, it also saves you the trouble of having to think. In fact, that's the beauty of why so many people are enchanted with OpenAI's beloved creation: you don't even have to think at all. That being said...I have been holding back on this topic for a while now, and maybe my run-in with the two scorned teachers is a sign that it's time to hash it out with ChatGPT. (My pending lawsuit from trashing Tinder isn't all that bad.)


First Off, What Is Generative AI?



Generative AI is any kind of artificial intelligence system that can take speech, text, images, or any other form of media as a prompt, run it through a deep learning model, and produce a cognitively impressive output. Generative AI depends on "deep learning", a branch of machine learning that uses many-layered neural networks to learn patterns from huge amounts of data and apply them to real-world problems. Deep learning powers things like identifying objects or people in a single image, translating from one language to another, or (the focus of this article) creating intelligent digital assistants.


There's a lot of "vagueness" surrounding the basic way in which ChatGPT operates, but we managed to find one of the simplest explanations, thanks to Kevin Roose's New York Times article from March 2023. Roose explains how a Natural Language Processing (NLP) neural network model starts out as a computer algorithm whose main purpose is to do three things (there's a toy sketch after this list):

  1. Collect a lot of data (sources such as books, websites, blog posts, tweets, etc.)

  2. Train on the collected data (the model learns statistical patterns in all that text, so when someone asks ChatGPT a question, it can generate an answer based on what it learned)

  3. Expand its capabilities (further training and human feedback teach ChatGPT to recognize patterns better and overcome past confusions/roadblocks to intelligent answers)
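To make that "learn patterns from the collected text" idea a bit more concrete, here is a deliberately tiny, toy sketch in Python. It is nothing like the real thing (ChatGPT uses neural networks with billions of parameters, not word counts), but the basic loop is the same: collect text, learn which words tend to follow which, then use what was learned to predict what comes next.

```python
from collections import defaultdict, Counter

# Step 1: "collect" some data (real models train on a huge chunk of the Internet).
corpus = "the band abba formed in stockholm the band released dancing queen".split()

# Step 2: "train" by counting which word follows which (a toy stand-in for
# learning statistical patterns from the collected text).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

# Step 3: "answer" by predicting the most likely next word for a given prompt.
def predict_next(word: str) -> str:
    candidates = next_word_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("the"))   # -> "band"
print(predict_next("abba"))  # -> "formed"
```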


So if you are anything like me, your number one question is: "Well, HOW exactly is ChatGPT collecting all this data and WHERE is it coming from?", to which the main answer is: the Internet. The amazing thing about the Internet is that we have access to so much information. It's all right there, waiting to be grasped by our fingertips, and much of ChatGPT's training material was gathered from it through web scraping. Web scraping is the process of using automated tools, such as OctoParse or ParseHub, to pull information out of websites and other sources published online. Think of the Internet as one huge, ever-expanding database of material that was scraped to build ChatGPT's training data; the model isn't live-searching the web every time you ask it a question, it's drawing on the patterns it learned from all that scraped text.
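For the curious, here is roughly what a bare-bones scraper looks like in Python, using the widely available requests and BeautifulSoup libraries. The URL is just a placeholder, and real training-data pipelines are vastly larger (and have to worry about things like robots.txt and terms of service); this is only a sketch of the idea.

```python
import requests
from bs4 import BeautifulSoup  # pip install requests beautifulsoup4

URL = "https://example.com"  # placeholder; a real pipeline crawls millions of pages

# Fetch the page the same way a browser would...
response = requests.get(URL, timeout=10)
response.raise_for_status()

# ...then parse the HTML and keep only the human-readable text.
soup = BeautifulSoup(response.text, "html.parser")
paragraphs = [p.get_text(strip=True) for p in soup.find_all("p")]

for text in paragraphs:
    print(text)
```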


But while the Internet is awesome in that sense, it poses two ginormous problems for generative AI: number one, the information is not always accurate, and number two, you're at an increased risk of plagiarism. And if you happen to have your roots firmly planted in academia (whether as a teacher, a student, or an AI scholar), this can have serious consequences depending on how you use the technology. Hence the primary, overarching question: Is ChatGPT damaging our brain cells?


Before We Hate...



There are so many interestingly positive things to note about generative AI. First, we have to acknowledge the progress made from 2010 to 2024, which is nothing short of remarkable! I personally remember people playing around with Amazon Alexa when it first came out in 2014 and all of the ridiculous things you could make her say or do. Almost any serious request would be met with "I'm sorry, repeat that?" or "I don't understand", in a frustratingly mocking tone that always sent people's blood boiling. And thinking back to our last example, if you had attempted that essay with one of those early assistants (plugging in "write essay about ABBA, the band"), you would probably have been met with some confusion:


  • Abba means "father" in Aramaic, or in some religious contexts, God the Father...

  • Rhyming pattern ABAB examples: "In visions of the dark night / I have dreamed of joy departed / But a waking dream of life and light / Hath left me broken-hearted."

  • The alphabet: A- Always be kind, B- Believe in yourself, C- Chase your dreams...


That last one was low-key condescending. But nowadays, Alexa is turning on your kitchen lights. She can unlock your car. She can even predict which celebrity couples will break up in the next month (her money is currently on Timothée Chalamet and Kylie Jenner). Secondly, we love the fact that people now have more time for the tasks that actually matter, such as brainstorming and cultivating the creative components of a project, instead of wasting time on tedious follow-through. Indeed, ChatGPT takes the drudgery out of the doing, which you could argue frees people up to produce more innovative and valuable content. Some of the basic things that Artificial Intelligence has gotten better at in the past couple of years include the following:


  1. Writing essays (duh)

  2. Composing emails

  3. Debugging code

  4. Solving math problems

  5. Summarizing long-winded topics (like the ones I rant about)

  6. Translating languages

  7. Writing catchy songs

  8. Applying to jobs

  9. Creating the perfect marketing slogan

  10. Peer reviewing

  11. Concocting a bestseller novel idea


There is something for everyone in any industry, in any profession, in any circumstance.

So while Artificial Intelligence used to be silly and not very smart, it's getting smarter, faster, and cleaner-cut. And boy are people taking advantage of it.


But the question that these teachers, and I, are mainly concerned with is: Are we? Getting smarter, we mean. When we look back at all of the remarkable improvements (leaps and bounds) that ChatGPT and other AI systems have made in the past couple of years, it's a little alarming from a human-potential standpoint. We aren't referring to the endless narrative that killer robots are going to take over planet Earth; that's not only become cliché, it's also becoming boring. What we're talking about is our natural talent: the charisma, ambition, and drive to sculpt beautiful, intelligent human minds, which, thanks to the brilliance of ChatGPT and generative AI, some people aren't exercising enough. Again, for working professionals who need to meet tight deadlines, ignore our grumbling. Our frustrations are aimed at students and the academic world as a whole.


"Is It Cheating If I Write My Essay Using ChatGPT?"



Yes. We don't want to say no, but it's totally a yes. And for three main reasons:


1. You Aren't Using Your Brain Cells



It is waaaaaay too easy to operate ChatGPT nowadays. Scarily easy, in fact, because just thinking about the reality of "if I did this all the time, then I would have no use for my brain" makes me very nervous for the younger generations. You may think I'm exaggerating, as I often do, but is there not some ounce of truth to that statement? I'm not a neuroscientist by any means, but a stronger case should be made for the learning and growth that our brains are naturally designed to take on. Let's put this into mechanical terms: if you have a fast-spinning wheel that drives a well-oiled, highly sophisticated machine, why would you block it with a wedge to slow down its pace? The main thing that these teachers and I are worried about is that while ChatGPT claims to be a tool that supports human beings, that's not how it's actually being used. Instead, a lot of students under the age of 25 (whose prefrontal cortices aren't fully developed yet) are relying on it so completely that they never have to exercise their own wheels.


I can't tell you how many times I've witnessed students type a topic into the ChatGPT textbox, copy and paste whatever material it spits out (without even reading it), and say, "Oof! What a hard day's work! Now let's go get a Subway sandwich." I'm rarely ever stunned, but these antics make me want to slap my hand to my forehead in frustration.


At the rate ChatGPT is being used, no one under the age of 20 is going to be able to write or think at a level that reflects their actual ability. Parents and teachers are also part of the problem. According to a 2023 survey reported by fastcompany.com, about "61% of parents are fine with their child using ChatGPT to complete assignments and 58% of teachers are fine using it to grade". So basically, students are using ChatGPT to write their papers only for ChatGPT to grade those same papers according to ChatGPT's educational standards, which reaffirms why the coffee-shop teachers were so upset. We really hope this is not the case for all of academia, but we have a right to be concerned about how ChatGPT is impacting kids' cognitive stamina.


We've already experienced the tragedy that TikTok inflicts on kids' attention spans, mental health, and interpersonal skills at a young age. If educational communities continue to rely on generative AI to teach, how will our children's ability to read a book, write an essay, or come up with creative, authentic solutions hold up when they're trying to accomplish their dreams? Which leads us to our next dilemma: Are we even learning anything at all?


2. As A Result Of Not Using Those Brain Cells, You Never Actually Learned Anything



Going back to our original concerns about plagiarism and the accuracy of ChatGPT's sources: while using ChatGPT doesn't earn you an automatic, full-blown plagiarism lawsuit, sometimes it gets a little too close. ChatGPT's technique of collecting and stitching together bits of information from its training data often amounts to rephrasing other people's original ideas. What those sources are or where they come from, who knows. But if you're going to use ChatGPT for citing, you may want to check its output against a credible source before including it in your thesis. In the few years that ChatGPT has been around, there have already been numerous ethical debates and legal disputes over the "authenticity" of people's work.


Put together, these two factors mean you haven't necessarily learned anything. That doesn't mean you can't rely on ChatGPT to provide valuable material; if anything, ChatGPT is a great starting point for research and for steering your ideas in the right direction. All we're saying is that if you get any information from ChatGPT, the smart thing to do is to verify it and give proper credit to the original source behind whatever the algorithm churned out.

3. When Shoved Into A Real-World Situation Where You Need To Apply That Information, You're Going To Flounder



In the workforce or in education, it's easy to recognize that some people have it and some people don't. The value of your character and integrity shows when you are able to apply knowledge to solve a real-world issue or perform in a high-pressure situation. Because let's face it: the real world is TOUGH. It's difficult to be expected to be "on" all the time, but that's just the way things are.


The people who can do it come out looking like rock stars (think Meryl Streep in Mamma Mia!) and the people who can't are the ones who look like slacking dum-dums (think Justin Bieber flubbing the Spanish opening to "Despacito" on stage and singing "blah, blah, blah" instead). When it comes down to it, if you pretend you authentically came up with an idea that ChatGPT mindlessly regurgitated, you are going to sink like a rock. The people who actually put in the effort to absorb information are the ones who end up mastering the material, and therefore they are better equipped to call upon that information in an impressive way. So here's the lesson: actually put in the effort to learn your stuff, or you're going to drown.


Security Concerns


Furthermore, Artificial Intelligence still has a long way to go in terms of being security-stable. While most developers nowadays at least have the ambition to build applications with security from the ground up, OpenAI has already had multiple instances of PII (Personally Identifiable Information) being compromised. Since its introduction to the public, there have been at least three major data leaks (involving training data, user credentials, etc.) traced directly to ChatGPT, which makes us think this app doesn't have the most robust application security.


Among the most critical ChatGPT vulnerabilities to date are instances of cross-site scripting (XSS) and unauthorized access to the app's config.json file (the main configuration file), pictured below:



There's also been a lot of gray-area debate, and sometimes even litigation, over OpenAI's legitimacy in accessing sources for ChatGPT's training data. One of those incidents occurred just last February, when OpenAI accused The New York Times of paying a for-hire adversary to mess with the wonder app. This came after The New York Times claimed that ChatGPT had unlawfully web-scraped its newspaper for training data without the paper's permission. Since this is kind of a ridiculous spat, we aren't going to cover it completely (click here to read about it and save yourself the headache), but it still opens up a gaping portal of discussion.


Not to mention the whole SAG-AFTRA fiasco in the summer of 2023, in which thousands of our favorite Hollywood stars, directors, producers, etc. protested generative AI because it was being used to replace human talent in the film and media industry. I never thought I'd see the day when millionaire actors and celebrities (people who are typically so far removed from society that they never seem to have a care in the world) would get so triggered that they would launch an unparalleled movement to defend humanity's rights against AI. The top 1% was in an uproar. They took to the streets over the fact that ChatGPT was being used to write TV pilot scripts or even late-night jokes for SNL (which were even less funny than the human writers', by the way) because, ultimately, they saw themselves being forced out of a job. The SAG-AFTRA strike is probably the craziest example to arise from the long history of AI vs. the human workforce, and it's likely that we will encounter further tiffs in the near future.


How To Use ChatGPT While Still Exercising Your Brain Cells



This brings us to the most important part of any Silicon article: "Okay, you've doomed us with all the problems, now what about some solutions?" First off, let's focus on the younger generations. Obviously, times have changed so drastically that it's unreasonable to think we can function without Artificial Intelligence and AI-incorporated systems. If we banned AI from the world completely, we would be throwing away the bountiful contributions it has made to society (advancements in medicine, transportation, agriculture, etc.). So here are some important points for each stakeholder to consider, geared toward reasonable use of AI:


  • Teachers: Teach your kids to adapt ChatGPT to their advantage, not rely on it as a crutch.

While the teachers I met in the coffee shop were fed UP with AI, it's clear that a lot of academia is still riding the wave. Teachers: we know how much you love ChatGPT for grading purposes, but you should only use it in ways that foster creativity and authentic growth in your students' learning experiences. For example, ChatGPT is an awesome tool for streamlining class activities: you can use it to teach your kids how to write code, structure the perfect essay, or formulate topics for an in-class discussion. You can also use it to create assessments based on class notes, check your students' material for plagiarism, or craft the perfect email for that one parent who is just always "too much" to deal with. Some examples of what you should NOT use ChatGPT for are:


  • Creating exams or quizzes not based on class material (Obviously evil)

  • Grading essays according to ChatGPT’s standards, subjectively scraped straight from the Internet

  • Encouraging plagiarism/poor citing of sources (If we all had to go through the pain and suffering associated with MLA and Chicago citations, then Gen-Z and Gen-Alpha have to go through it too.)


Overall, ChatGPT should not be utilized as a replacement for your role as a teacher.


  • Students: Remember to be independently creative.

Students are often victims of imposter syndrome, failing to realize just how smart they actually are. I know it's easy to look around at the world today and feel like everyone and everything is too complicated or exhausting to understand, but rest assured: your ideas are good. There's nothing more admirable or refreshing than a thought that originates from your own brain, so if you ever feel like you're hitting a tough roadblock, know deep down that this is, in actuality, a good thing.


As per this long-winded lecture, hitting a roadblock simply means you are exercising your brain cells, because nothing worthwhile ever comes easily. If life feels too easy, then you're probably not doing it right...meaning you aren't meeting your full potential. Therefore, write your own essays. Use ChatGPT to edit or rephrase a messed-up paragraph, but please don't copy and paste whatever it mechanically spits out. If you do, two things will likely happen: 1. You get sent to the Honors Board (or whatever weird school panel deals with plagiarism crimes), or 2. You move through life less capable and more "basic", and there's nothing worse than being known as "basic".


  • Managers: Implement policies and access-controls geared towards protecting PII and mobile data.

It's definitely a great idea to implement internal policies for how your team or department should conduct itself when using ChatGPT for business purposes. A good place to start is data classification. I distinctly remember, this past summer, overhearing one of my coworkers ask my manager if they could put company data into ChatGPT to help put out a PowerPoint faster, and my manager literally gave him the death stare. So for that specific circumstance...hard no. But depending on your data's classification level, you can discern which data is okay, and which is definitely NOT okay (as in my manager's case), to plug into ChatGPT.


However, when considering which data is safe, AGAIN, it is important to remember that ChatGPT and other generative AI apps are not security-proof. They may cut production time in half when you need to meet a deadline, but they still have plenty of vulnerabilities to correct, as seen in NIST's National Vulnerability Database. Additionally, before implementing your own internal policies, you should check in with your third parties and ask whether they have any data that should not be used in conjunction with ChatGPT. Remember that when handling third-party data, it is always vital to adhere to cybersecurity due diligence at every stage of the data retention lifecycle, from the creation to the destruction of PII. This means no plugging your partner's sensitive data into ChatGPT without asking permission first.
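As a purely illustrative sketch (my own hypothetical example, not an OpenAI feature, and with deliberately simplistic patterns), here is the kind of pre-flight check a team could wire into any script before text ever gets sent to an external AI service:

```python
import re

# Simplistic, illustrative patterns only; real data-loss-prevention tools go much further.
PII_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def safe_to_send(text: str) -> bool:
    """Return True only if no known PII pattern appears in the text."""
    findings = [label for label, pattern in PII_PATTERNS.items() if pattern.search(text)]
    if findings:
        print(f"Blocked: draft appears to contain {', '.join(findings)}.")
        return False
    return True

draft = "Summarize Q3 results for jane.doe@company.com, SSN 123-45-6789"
if safe_to_send(draft):
    pass  # only here would the text actually go out to the AI service
```

It's a toy, but it captures the policy idea: classify (or at least screen) data before it ever leaves your environment.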


  • Users: Pay attention to your behaviors and OpenAI's copycats.

For everyday users, ChatGPT is probably the most exciting and fun tool to play around with, and we encourage you to do so! Take advantage of studying and experimenting with generative AI up close before further regulations get put in place. A lot of people have proven that you can do some pretty amazing things with ChatGPT (as we listed earlier in this article), so use away! The only caution we'll give is to make sure you have some form of mobile security protection in place, such as a strong password, Full Disk Encryption (FDE), and regular backups.

Conclusion



In hindsight, this post started off with good intentions not to scorch ChatGPT with the burning fire of a thousand suns, but I guess I ended up doing that anyway. Sorry about that. I still think that generative AI and ChatGPT are highly useful assets to society...as long as they continue to improve securely. I can at least guarantee that I'm keeping things "old school" at siliconcyberai.com, meaning I'm definitely NOT using ChatGPT for squat unless it's spell-check. To nudge this tirade back around to full circle, I imagine that someday we're all going to look back with nostalgia at these crazy years and ruminate to our dumbfounded grandkids, "It was the 2020s! We just did weird stuff with AI all the time!", just like our parents said about the '70s (in response to the confusion about shag carpeting). Anyways, to sum this up: please just use AI for good and not evil. That's all for now.




