Looking at the piece of writing that was being projected onto the SMART Board in my classroom, my colleague leaned over and whispered, “I think that was AI generated.”
We were in our weekly after-school writing club meeting, and I had pulled up past winning essays posted on the website of the writing contest our students were preparing to enter. I thought it would be helpful for them to see what had previously been selected.
We had read them over, and the students then shared what they noticed and what they liked, essentially deconstructing the writing to uncover any tips and tricks they could use.
They were now hard at work on their own pieces of writing.
I looked at my fellow writing club sponsor, himself a former journalist and talented writer. “Really?” I asked. That had never occurred to me.
“Pretty sure,” he nodded, and then he began to point out some vocabulary, phrasing, and descriptions that had made him suspicious.
The use of AI tools in creating and writing is certainly not new; they range from spelling and grammar checkers to brainstorming assistants. Clearly, though, the recent availability of new AI tools has changed the landscape. Large language models. Small language models. Chatbots. Virtual assistants. The list is endless.
Even here, when I log onto my own author website to post my blog, I see a message asking me if I would like AI assistance in writing it. Yikes! (Rest assured: I do not use AI at all to generate any of my writing!)
As an adjunct professor at the local university, I recently went through a training session in which the university made it very clear that AI is here to stay, that it has almost limitless potential, and that we owe it to our students to make sure they know how to use these tools ethically and effectively.
And right there is the conundrum. What are the ethics of AI-generated work? Surely creative and artistic endeavors must (or should) have different ethical guidelines than more analytical uses of AI.
What does it mean to use AI ethically? Most of the information and research out there right now on AI and ethics has to do with the issues of bias, transparency, and privacy.
Recently, in a writers’ group to which I belong, there was a very interesting debate about illustrators using AI tools to create artwork. Should they let the writer know? Can the writer demand no AI-generated artwork from an illustrator with whom they are working? How can that be verified?
As with most technological advances, we are still in the ‘figuring it all out’ stage. There are clearly more questions than answers right now, which means that discussions, debates, and differences of opinion will be the rule while we work to create accepted norms. Not just broad industry norms, but narrower norms specific to things like artistic work as well.
AI will most definitely impact every single one of us going forward (even if you are not a teacher or a writer), so it is a topic we should all be watching very closely.