The advancement of generative artificial intelligence (AI) in recent years has significantly changed both education and the software engineering industry. Though controversial, in both fields AI has frequently been used to “streamline” work. Whether it is explaining concepts or writing code, its ability to simplify tasks may lead to a greater dependence on it in the future. It is important to note that AI is not perfect; even so, many fields are adapting to it and integrating it within their respective spaces.
To give an overview of my AI usage in ICS 314, I primarily used AI to explain errors in my code and help with the debugging process. I also used it to find alternate, more efficient ways to write code. As for tools, I mainly used ChatGPT and GitHub Copilot for coding help. I also used Google’s AI-assisted search (though to my knowledge there is no way to turn this off in Google, so in a way it was unintentional, or forced upon me).
For a majority of the experience WODs, I did not use generative AI. In the cases where I did, it was only during my first attempt at the WOD, before I had watched the solution video. If I ran into something I was unsure how to implement, I would turn to AI. An example is E12: Jamba Juice 1, where the third and fourth parameters of the given constructor needed to take key-value pairs. I had no idea how to do that in TypeScript, so I used ChatGPT to help me implement it. Otherwise, after watching the solution videos I knew how to implement the requirements, so I would not use AI beyond the first attempt (or not use AI at all if I already knew what to do).
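I no longer have the exact WOD spec, so the class and property names below are my own invention, but the pattern ChatGPT showed me looked roughly like this: modeling key-value-pair parameters with TypeScript’s `Record` type.

```typescript
// Hypothetical sketch (names are made up, not from the actual WOD):
// a constructor whose third and fourth parameters take key-value pairs,
// typed as Record<string, number> (e.g. ingredient name -> quantity).
class SmoothieOrder {
  constructor(
    public name: string,
    public size: string,
    public fruits: Record<string, number>,  // e.g. { strawberry: 2 }
    public addOns: Record<string, number>,  // e.g. { protein: 1 }
  ) {}

  // Sum the quantities across both key-value-pair parameters.
  totalItems(): number {
    const count = (kv: Record<string, number>): number =>
      Object.values(kv).reduce((sum, n) => sum + n, 0);
    return count(this.fruits) + count(this.addOns);
  }
}

const order = new SmoothieOrder(
  "Berry Blast",
  "medium",
  { strawberry: 2, banana: 1 },
  { protein: 1 },
);
console.log(order.totalItems()); // 4
```

Using `Record<string, number>` (rather than separate positional parameters) is what lets the caller pass arbitrary key-value pairs without changing the constructor’s signature.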
Since I did not keep records of my in-class practice WODs, I can’t give an accurate estimate of my AI usage for them. However, I do remember turning off GitHub Copilot for a majority of the in-class practice WODs and trying my best not to rely on AI in general, since I wanted to reach the results on my own in preparation for the actual WODs. One situation where I did use it was the functional programming WOD, where I did not know which methods would achieve one of the requirements.
As with the in-class practice WODs, I remember turning off GitHub Copilot for most of the in-class WODs and trying not to rely on AI, just to gauge my skill without it. When time started running out and the pressure kicked in, that is usually when I turned to AI, since I had set a goal early in the semester to pass the majority of the WODs. One example of my AI usage in an in-class WOD is the coding standards WOD (Baby-Bieber). Like the practice WOD example, it involved functional programming and figuring out how to produce a result using functional programming methods. I failed that one, but I think it was still a valuable lesson in how AI is not always reliable.
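The data and function names below are hypothetical (I don’t remember the exact WOD requirements), but they sketch the kind of functional programming these WODs asked for: chaining `filter`, `map`, and `reduce` instead of writing loops.

```typescript
// Hypothetical sketch of a functional-programming WOD task:
// compute the total duration of one artist's songs using only
// filter/map/reduce, with no explicit loops.
interface Song {
  title: string;
  artist: string;
  durationSec: number;
}

const totalDuration = (songs: Song[], artist: string): number =>
  songs
    .filter((song) => song.artist === artist) // keep one artist's songs
    .map((song) => song.durationSec)          // extract the durations
    .reduce((sum, sec) => sum + sec, 0);      // sum them up

const songs: Song[] = [
  { title: "Baby", artist: "Justin Bieber", durationSec: 214 },
  { title: "Sorry", artist: "Justin Bieber", durationSec: 200 },
  { title: "Hello", artist: "Adele", durationSec: 295 },
];

console.log(totalDuration(songs, "Justin Bieber")); // 414
```

The hard part in the WOD was not the syntax but choosing which of these methods to combine, which is exactly where I turned to AI.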
I’ve never used AI for the essays in this class. My mindset for writing essays (speaking generally, not just for ICS 314) is to not use AI at all, because I believe it is disingenuous to write an essay with AI assistance, especially when the prompt asks for a reflection, as was the case in this class. The disclaimer in each of the essay assignments also notes that using AI for these essays doesn’t really reflect your “voice,” which further supports my stance against using AI for writing essays.
At most, I understand people using AI to come up with a title or to check for spelling and grammar errors. I don’t see a reason to use it for making outlines, coming up with talking points, or outright writing the content of the essay. Overall, I feel confident enough in my writing ability not to rely on AI.
I relied heavily on tutorials and AI assistance for the final project, a company connector. There were many features we wanted to implement that we didn’t really know how to build, which is why my team and I used tutorials and AI. My part of the project involved a lot of back-end work and setting up different user types (such as a student account or a company account). I was unsure how to model the user types, let alone how to retrieve the data values for each user within their respective user type.
A more specific example is how I modeled a user as a company and then linked that company to a set of jobs. I used AI to help link these models and to implement individual companies with their jobs in the code. For the majority of my back-end tasks, I relied heavily on GitHub Copilot and ChatGPT because I was not very experienced with working in this area.
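The model and field names below are not our project’s actual schema, just a minimal sketch of the kind of one-to-many Prisma relation AI helped me write to link a company to its set of jobs.

```prisma
// Hypothetical sketch (model/field names are my own, not the project's):
// a one-to-many relation where each Job points back to its Company.
model Company {
  id   Int    @id @default(autoincrement())
  name String
  jobs Job[]  // one company has many jobs
}

model Job {
  id        Int     @id @default(autoincrement())
  title     String
  company   Company @relation(fields: [companyId], references: [id])
  companyId Int
}
```

The `@relation` attribute with a `companyId` foreign key is what ties each job row to its company, which is the linking step I couldn’t figure out on my own.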
I did use AI to help me learn concepts, especially through code examples. One example comes from the final project. I wanted an example of relationships in Prisma, because our company connector app required different parameters for a student user and a company user. So I asked ChatGPT to show me how to model them in a different context: student, teacher, and parent users in a school conference app. I then applied those concepts when implementing my own student and company users for our company connector, and I’d say it was very helpful.
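The pattern I took away from ChatGPT’s school-conference example, translated to our app, looked roughly like the following. This is a hedged sketch with invented field names, not our real schema: one shared `User` model with optional one-to-one profile models carrying the type-specific parameters.

```prisma
// Hypothetical sketch of the pattern I applied: a shared User model
// with optional one-to-one profiles, so a user can be either a
// student or a company, each with its own extra fields.
model User {
  id      Int             @id @default(autoincrement())
  email   String          @unique
  student StudentProfile?
  company CompanyProfile?
}

model StudentProfile {
  id     Int    @id @default(autoincrement())
  major  String // student-specific parameter
  user   User   @relation(fields: [userId], references: [id])
  userId Int    @unique
}

model CompanyProfile {
  id       Int    @id @default(autoincrement())
  industry String // company-specific parameter
  user     User   @relation(fields: [userId], references: [id])
  userId   Int    @unique
}
```

Making each profile optional on the `User` side and marking `userId` as `@unique` is what keeps the relations one-to-one while still letting a user have only one of the two profile types.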
For asking questions, I turned to Google, forum posts (like Stack Exchange), or AI for more implementation-specific matters. Aside from the examples above, another way I used AI for questions was with some tutorials and installations. I felt that some of the installation guides (like Prisma’s or PostgreSQL’s) were uninformative and confusing, which is why I just used AI instead of following their instructions.
When it came to answering questions or smart-questions in Discord, I only answered a few, and they did not require AI. For example, someone asked what needed to be put in the .gitignore file, and I simply answered with the files that needed to be listed.
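I don’t remember my exact answer, but for a Node/Next.js project like ours the entries I listed would have looked something like this (typical entries, not a verbatim copy of our file):

```
# installed dependencies (restored with npm install)
node_modules/
# Next.js build output
.next/
# local environment variables and secrets (e.g. the database URL)
.env
# stray log files
*.log
```

The general rule I gave was the same in any case: anything generated, installed, or secret stays out of version control.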
I don’t recall using AI to explain code very much. The only times I did were while working on the final project: some code implemented by my teammates was hard for me to follow, so I would use AI to explain it.
This has been covered in my explanations of the WODs and the final project. To summarize, for things that I did not know how to implement, I would use AI to assist me with writing the code.
I didn’t really add comments or documentation to my code this time around, so by extension, I didn’t apply AI to code documentation.
Throughout the course, I ran into a lot of bugs I could not track down, and GitHub Copilot definitely helped with the debugging process. One example from the final project involved data querying. It was a simple mistake: in an add function (specifically, adding a job for a company), one of the properties was not included in the validation schema, which prevented the whole query from working. GitHub Copilot explained what the error was and pointed directly to its location.
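To illustrate the shape of that bug (the schema and field names here are made up, and I’ve hand-rolled the validation rather than using our project’s actual validation library): when a schema only passes through the fields it knows about, a property that was forgotten in the schema silently never reaches the database query.

```typescript
// Hypothetical reconstruction of the bug: a validation schema that
// only copies known fields, so a field missing from the schema is
// silently dropped before the add-job query runs.
const jobSchema: Record<string, "string" | "number"> = {
  title: "string",
  salary: "number",
  // "location" was accidentally left out of the schema
};

function validate(input: Record<string, unknown>): Record<string, unknown> {
  const clean: Record<string, unknown> = {};
  for (const [key, type] of Object.entries(jobSchema)) {
    if (typeof input[key] !== type) {
      throw new Error(`Invalid or missing field: ${key}`);
    }
    clean[key] = input[key]; // only schema-listed keys survive
  }
  return clean;
}

const cleaned = validate({ title: "Intern", salary: 20, location: "Honolulu" });
console.log("location" in cleaned); // false: the query never sees it
```

The fix was simply adding the missing property to the schema; the useful part of Copilot was connecting the failing query back to that one-line omission.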
N/A
I think the incorporation of AI into my learning experience hasn’t been great. While it has definitely helped me a lot in this class and other courses, it deprives me of the experience and the struggle of reaching a solution. I also think it doesn’t help with memorizing or retaining information, such as how to implement something or what a piece of functionality does. Its convenience has definitely been a bad influence on me, and in the future I need to keep myself accountable and avoid using AI as much as possible to get the most out of my learning experience.
One thing that comes to mind is AI being used for chatbots on many sites. They can talk to customers (in the setting of online shopping sites) or help people find information (for example, government sites using chatbots). Chatbots serve their purpose as a quick form of communication, but the disconnect from human interaction affects how reliable they can be. For instance, the messages AI produces may not be helpful, and the nuance picked up in human interaction may be missed. Overall, this is just one practical application of AI.
AI is definitely not perfect, and there are limits to its effectiveness. One instance I encountered was in the final project, when AI suggested that a “solution” to our app deployment problem was to change code in the files that handled connecting to the database and the authentication API. From our understanding, those files should never have been touched, and because we did touch them, we only produced more errors. “AI hallucinations,” or AI providing misleading information, are very much real and act as a reminder that AI is not perfect.
I think traditional methods succeed in giving the foundations and a deeper understanding of the content beyond what AI could ever give. Moreover, from my understanding (and my experience in previous programming courses), traditional approaches usually involve “labs” that help students with programming assignments but also act as a space for further learning. The traditional method has a much steeper learning curve, but you learn more this way. In contrast, AI-enhanced approaches “simplify” the content, which results in an easier learning curve; in that way, they may keep people more engaged. Beyond that, retention and practical skill development are better with traditional methods. Overall, the traditional method is difficult but you learn more, while an AI-enhanced approach is easier but you may not learn as much.
I think AI will remain integrated in software engineering education. The fact that it can streamline learning to program makes the field more approachable for many students. In the future, I can see advancements in machine learning making AI more efficient at educating others and reducing “AI hallucinations.” Still, as mentioned in the comparative analysis, there is a risk that AI-enhanced software engineering education becomes too “simple,” to the point where it fails to teach students the actual fundamentals. In that way, students may be deprived of the quality of education usually received through traditional methods.
To summarize, AI is definitely a useful tool in this software engineering course. It can assist with many aspects of programming, such as writing code and even debugging, and its ability to teach and explain concepts helps the learning process. Still, it is important to keep ourselves accountable in our usage of AI, as it can deprive us of our learning experience. Moreover, AI is not perfect and is not the solution to every problem in software engineering. As such, my recommendation for “optimizing” the integration of AI in future courses is to drill in the idea that it can be both helpful and harmful. Students should take responsibility for their own learning and try to make the most of their experiences without relying too much on AI.