When I registered for ICS 314 during the Fall 2024 semester, students who had already taken the class told me that AI tools like ChatGPT and GitHub Copilot were allowed, and sometimes even encouraged. This caught me by surprise, since most classes at the University of Hawai’i at Manoa prohibit the use of AI entirely. In ICS 314, however, AI was meant to be treated as a tool, and it was entirely up to each student to decide how to use it. AI can be used to write code completely for the user, leaving no room for learning and getting in the way of building basic software engineering skills.
In my time in ICS 314, I mainly used ChatGPT and GitHub Copilot, tools that made writing code significantly easier. At the start of the semester, my goal was to use these AI tools to assist my learning, but once I started to fall behind on things like syntax and how these new programming languages worked, I quickly began to rely on them too heavily to do the writing for me. In the beginning, I used AI to correct my syntax and make sure things worked: I would write the code first, send it to the AI tool, and have it fix my mistakes. I found this to be a good way to teach myself the language I was learning. Toward the end of the semester, I was barely keeping up with the fast pace of the course, so I relied on AI to write my code in its entirety and only fixed up the small mistakes it made. This method was extremely efficient, but I was losing out on learning.
Here are some of the course elements where AI could be used in ICS 314:
Experience WODs are timed exercises we worked on as practice for the timed WODs we would have in class. For the Experience WODs, I did use AI tools. For example, when I got stuck after my first attempt at a WOD, I would watch the solution video provided by the instructor, ask the AI how it would write the solution based on the video, and then write my code from its response.
For the in-class practice WODs, I used AI tools similarly to how I used them for the Experience WODs, but since there were no solution videos, I would have the AI generate my code on the spot from the instructions on the webpage. During the practice WODs, though, the AI would often trap itself in a loop: it would generate code with an error, fix that error, and then produce the same error again. Since these were just practice WODs and working with classmates was allowed, I was still able to find solutions. That was a case where AI could not give me answers, and it came up fairly frequently in the practice WODs.
For the real, timed, in-class WODs, I am fairly sure I used AI every week except one, when the WOD only took about ten minutes. That one WOD where I did not use AI came early in the semester, when I was still keeping up with the material and wanted to challenge myself to see if I could pass a WOD without AI’s help. For every other WOD, AI was the tool that let me write most of my code within the time limit; I am confident I would not have passed any of them without it.
For the essays I wrote in ICS 314, I stayed away from having AI write them for me, since most of the essays were meant to be reflections on what we thought about a topic. I could not have AI represent my voice; I had to write what I actually thought. The only essay help I got from AI was with grammar, using tools like Grammarly to fix my mistakes.
For the final project, I used AI for pretty much everything. For any issue I was working on, I spent quite a bit of time writing long prompts to make sure I got what I wanted out of the AI. For example, I would copy and paste the code my classmate had worked on, give it to the AI along with a prompt describing what I had in mind, and then test what it gave me back. I found that method really efficient and effective for this project. The only downside is that everything I learned about React and Next.js will quickly be forgotten, since AI wrote most of my code for me.
In the early parts of ICS 314, I used ChatGPT to explain how code worked. For example, I had a lot of trouble understanding TypeScript and how it was different from JavaScript. I asked ChatGPT how they differed and why you would use one over the other, and its explanation was extremely easy to understand.
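To give a sense of the kind of difference it explained (this is a small made-up snippet of my own, not the actual example ChatGPT gave me), TypeScript adds type annotations that catch mistakes before the code runs, while plain JavaScript only finds out at runtime:

```typescript
// TypeScript: type annotations let the compiler catch a bad call.
function add(a: number, b: number): number {
  return a + b;
}

add(2, 3);       // OK
// add(2, "3");  // compile error: argument of type 'string' is not
//               // assignable to parameter of type 'number'

// In plain JavaScript there are no annotations, so add(2, "3")
// would run anyway and return the string "23" instead of 5.
```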
I do not think there was a time when I had to use AI to answer questions in class or in Discord. I rarely had the opportunity to answer questions in the Discord channel, mainly because other people were quicker to respond and often had better explanations than I did.
I rarely asked a smart question in the Discord chat. If I had a troubling question, I would give it to ChatGPT first, and only if I could not get an answer would I ask in Discord. This happened just once, when I was having trouble setting up ESLint and other people were running into the same specific problem.
There were many times when I asked ChatGPT to give me example code, especially earlier in the semester when I was having trouble understanding TypeScript. While I watched the screencast tutorial videos the professor gave us, I kept ChatGPT open on the side to generate an example of whatever code I was having trouble understanding.
There were many instances where I had ChatGPT explain code to me. The two main ones were during the practice WODs and during the final project. During the practice WODs, I had ChatGPT explain how the solution example code worked and how it differed from my own code. During the final project, I had it explain how my classmates’ code worked so that a small mistake on my part would not break anything they had written.
The majority of the code I produced during the WODs, the practice WODs, and the final project was generated by AI. During the WODs, I saved time by copying and pasting the problem from the ICS 314 website into ChatGPT and then fixing whatever errors its code produced. Granted, this method severely hindered my learning, and I found myself relying heavily on AI to write code for me.
There were not many occasions where I had AI write documentation such as comments for me. The only time it did was when I asked it to highlight the changes it had just made to a piece of code so I could compare the versions.
The only quality assurance I had AI do for me was fixing small ESLint errors like incorrect spacing. Early in the semester, if something I wrote looked disorganized, I would have ChatGPT reorganize the code to make it cleaner.
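For context (this is a made-up illustration, not code from one of my actual assignments), the fixes were usually as small as spacing issues that ESLint flags and that `eslint --fix` or the AI could clean up:

```typescript
// Before: the kind of cramped spacing that ESLint style rules would flag.
function greet(name: string): string { return 'Hello, '+name; }

// After: the same code with the spacing and formatting fixed.
function greet(name: string): string {
  return 'Hello, ' + name;
}
```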
I do not think I used AI in any way other than the ones listed above. I used it mainly for writing code, debugging, and solving ESLint errors.
During my experience in ICS 314, I found that AI was a tool that hurt my learning and understanding of the material. After falling behind slightly, I found myself using AI to write code for me entirely instead of trying to learn the material myself. AI became a shortcut, and I regret taking it, because I ended up learning very little. I still believe that if AI is used correctly, it can enhance the learning process rather than replace it. Through this class I came to understand that AI makes programmers more efficient not by writing large amounts of code for them, but by serving as an assistant that helps with debugging and puts programmers on a fast track to understanding complex code.
Outside of ICS 314, I found AI really helpful for practicing a second language. During my time in JPN 101 and 102, I found that having very elementary conversations with AI tools like ChatGPT made very good practice for oral exams. ChatGPT puts a time limit on its voice-conversation mode, but even that short window provides a lot of practice if you cannot get help from a native speaker.
As mentioned before, one of the main roadblocks I encountered using AI in this course was the loop it kept putting itself into: it would generate a solution, hit an error in the code it had just created, apply the same fix as before, and keep cycling through the same unfixable problem. I would still encourage further integration of AI into software engineering education, but with heavy emphasis on using the tool responsibly, because it can strip students of learning opportunities and, in the long run, hurt their careers.
When comparing traditional teaching to AI-enhanced learning, there are benefits on both sides, but I think AI has more pitfalls. The main benefit of working with AI is that you can ask very specific questions about a very specific problem and get a specific answer. However, there were many cases where I found the AI’s code to be wrong and had to correct it. I am also not sure how well knowledge is retained when learning new things through AI, since its responses are cut short and can leave out valuable information. I believe learning face to face with an instructor leads to longer knowledge retention and real engagement. There were plenty of times when I had a hard time engaging with the AI model because I was just reading text off a screen, the way I would with any textbook.
Overall, I think AI is healthy in software engineering education as long as it is used responsibly and not as a way to cut corners, and that point should be stressed to students. I believe that having AI generate whole solutions for students should be prohibited, while tools like Copilot that assist students as they write can be good. For future software engineering classes, code generation should be prohibited but other AI uses still allowed; there just needs to be a way to enforce that rule, such as detection tools that flag when a student has generated large portions of their code.
Through my experience in ICS 314, I discovered that AI use in software engineering can be dangerous to learning and understanding the material. Used responsibly, it can help students work much more efficiently and save a lot of time. I believe AI is a good tool for understanding specific problems and conflicts, while generating large chunks of code should be prohibited. I regret that I used AI irresponsibly in this class; my misuse of it led to very little learning during my time in ICS 314, and I intend to change how I use AI so that it enhances my learning rather than replaces it.