Oregon Business Magazine - February 2024

AI: AN IN-DEPTH REPORT

…and collaboration with the industry. I think you'll see a lot of good guidance on security and ethics coming from NIST over the next year.

Cass Dykeman: My students ask me, "What can I write using an LLM and what can't I write?" I told my faculty, "I am not going to play AI Police. I'm just not going to do it." I want them to learn how to use it effectively, so I'll have them use an LLM to write a term paper, and then I'll have them critique the paper. Now, the really smart ones use another LLM to critique the paper, which is fine. I'm also trying to work on an architecture that will come up with research questions, gather the data and write an article. Then I'm going to submit it to a journal, completely transparent about what I did, and see what the journal does in terms of reaction.

Rebekah Hanley: It's a time of shifting norms, and I think there's a lot of diversity of thought in the university, and in K-12 as well, about what's appropriate in terms of reliance on generative AI for research, for writing, for editing. We're empowering and encouraging students to ask a lot of questions, to clarify with whichever instructor is overseeing a particular project: What is allowed? What is expected? I do think that there's a real risk of well-intentioned people getting sideways in terms of academic integrity questions, where there's just a good-faith disagreement about whether what they did is or is not consistent with course policies or university policies relating to independent work product creation and plagiarism. In terms of citation, I think we're going to see, and probably already are seeing, a lot of disagreements about what's appropriate, what's inappropriate, what's cheating.

OB: I was just thinking about how, not that long ago, teachers would have said that using spell-check or grammar check is cheating, because you're supposed to learn how to spell and you're supposed to learn the rules of grammar. Now I don't think anybody would say that. I say to writers, "If you haven't spell-checked your story before you turn it in, you're not done with it." I wonder if the way we think about plagiarism is going to change in the coming years because we have these tools.

Venkatraman: They said the same thing about the calculator. I think the repetitive nature of what we do is getting increasingly automated, and that frees us up for some higher-level cognitive thinking and some multistep reasoning, which these models are not capable of today. It's a good copilot to have.

Dykeman: I'm a reviewer for a number of journals, and I'm getting emails from the journals saying, "Do not use LLMs to review your articles." Even the artificial intelligence journals are saying this. But it's…

SIDEBAR

"The National Institute of Standards and Technology (NIST) is an agency of the United States Department of Commerce whose mission is to promote American innovation and industrial competitiveness. NIST's activities are organized into physical science laboratory programs that include nanoscale science and technology, engineering, information technology, neutron research, material measurement, and physical measurement. From 1901 to 1988, the agency was named the National Bureau of Standards." (Source: Wikipedia)

"Large language models (LLMs) are machine learning models that can comprehend and generate human language text. They work by analyzing massive data sets of language." (Source: Cloudflare)

[Photo: Skip Newberry, Cass Dykeman and Rebekah Hanley]
