AI: AN IN-DEPTH REPORT

There's going to be a point very quickly — we're there, right? — where an LLM can write an academic article and an LLM can review an academic article. So where are we, and what are the implications?

Jennings: As a writer, I agree: Use the tools to teach people how to write a good prompt and follow a good thread, but we also have to keep the skill of writing. Like in math, we don't want people unable to write out a quadratic equation, do we? They have to at least learn the skill and then take advantage of the machine. I think all the AI around us raises the question of what it means to be human. We're going to have to discover that in new ways. Part of it is going back to things like storytelling; that is a uniquely human skill that's not going away. No jury wants to see an AI spout off about something. And emotion and connection — these other uniquely human qualities, I think, are going to become more valuable, more treasured over time.

Venkatraman: At the Oregon Talent Summit, I was on a panel discussion where they were asking, “What are the skills that will be required in the future?” My emphasis was entirely on human skills: things like collaborative problem solving, complex problem solving or critical thinking, and empathy and communication. The most important skill, in my view, was the self-critic aspect: How do you learn from your mistakes and constantly improve? I think we need to build a foundation in these skills. That's one thing we can do, because these are not skills that can be automated.

Skip Newberry: Do you think a healthy dose of skepticism needs to be taught better — whether in K-12 or higher ed — about how these systems work and where they can fall short? Otherwise, you end up in a situation where everyone reinforces the same common standard over time. You lose creative opportunities if people just fall into what's easy, rather than being able to critique both the underlying tools and what they produce.

Hanley: I believe that people need foundational knowledge in addition to the skill of using an LLM. They need that knowledge to critique and improve upon whatever the automated system produces. There's a real risk of homogenization — of losing not just our unique humanity but diversity of perspective, voice and thought. We are going to lose a lot of ideas and voices if we rely on AI too heavily.

Jennings: One of the things I learned working with AIs is that they lie. There's no moral compunction to tell the truth. They're like my little dog: They want to please, so they'll tell you something even if it's not quite true. We need to develop a hypercritical look at the information we see. We see it in politics. We see it in society. There's a lot of misinformation, disinformation and misdirection coming at us, right? How do we filter all that? Well, hopefully we'll be able to get some help from AIs as well. But humans have to guide that, I think.

Venkatraman: Reinforcement learning with human feedback is important here. These models are too new. They're wrong, but they're confidently wrong. You have to tell them that they're wrong and help improve them. To your point about skepticism, I think we have to be careful about people overestimating AI's capabilities. There is a lot of hype about AI.
The 2022 Oregon Talent Summit convened a broad array of thought leaders to discuss the implementation of former Gov. Kate Brown's Future Ready Oregon package, which included a $200 million investment to foster a diverse, skilled workforce.