On Large Language Models (AI) and Aerospace Education

Artificial intelligence (AI) is changing all aspects of our lives, much like the internet did when it became widely available to consumers in the mid-1990s. There have been many discussions about how the AI revolution is affecting different areas, including the workplace, art, culture, writing, and academics. Recently, ChatGPT (introduced by OpenAI as “ChatGPT: Optimizing Language Models for Dialogue”) has been making a significant impact in all of these areas.

The transformer architecture underlying large language models originated at Google, where researchers were developing algorithms for machine translation (e.g., English to French). The approach was published in an academic paper, and companies such as OpenAI quickly adopted it. For a technically minded audience, I recommend the free article at Ars Technica (January 2023, https://arstechnica.com/gadgets/2023/01/the-generative-ai-revolution-has-begun-how-did-we-get-here) for an accessible explanation of the algorithms. Despite initial skepticism, OpenAI is now receiving billions of dollars in investment from companies such as Microsoft.

As a professor at the University of Florida, which is at the forefront of integrating AI technology in research and teaching, I have seen firsthand the impact of AI in the classroom. The University of Florida has the world’s largest NVIDIA-based supercomputer, which has been instrumental in advancing AI research.

However, many of my colleagues at the university are concerned about the effect AI is having on students’ understanding of the material. These concerns are not limited to the University of Florida; similar discussions are taking place at universities across the United States, as detailed in the New York Times (January 2023, https://www.nytimes.com/2023/01/16/technology/chatgpt-artificial-intelligence-universities.html).

In my class this semester, my graduate students are required to write a ten-page term paper in the style of an AIAA Journal article. I have noticed improvements in their writing, but at the same time a decline in their understanding of the material compared with previous years in which I have taught the class. I suspect that they are using a transformer-based tool to assist with their writing. The question remains: how should we respond if our goal is to teach critical thinking (as discussed in my article in the previous NASA Newsletter)?

The genie of AI language generation is out of the bottle. AI will not leave the classroom, workplace, or industry, even if rules are made against its use. A new revolution is underway. In my class, I have instructed the students to include a new section under Acknowledgements in their term papers, specifying exactly how they used AI, if they chose to do so, to aid in their writing. AI should not be used to write a university term paper outright, but it can help revise and guide the writing. Perhaps this is the most ethical approach to take.

Whether humans can reliably differentiate between AI-generated and human-written content remains to be seen.