AI: AN IN-DEPTH REPORT

Venkatraman: With basic coding, I think you're right that language models can take care of that. But it's more important to teach students how to think algorithmically, to solve problems independent of the syntax of the language. That higher-level cognitive thinking and problem solving, that's still important.

Hanley: When it comes to needing to overhaul education, educators at the K-12 level, and in higher ed, are already burned out and exhausted. There is some real overwhelm in thinking about all that the generative-AI revolution implies. I worry that the project is too big and also constantly shifting. There's no moment in which we can all take a breath and say, "OK, now I understand, I see the landscape, I understand what I have to do." Because every day something has changed, and the thinking evolves, and the needs shift.

Venkatraman: I was talking to somebody from the Department of Education about how important it is for us to revamp our curriculum. And the answer I got was, "We do review the K-through-12 curriculum, but only once every seven years, and then we usually decide not to do anything about it." But seven years is a whole generation when it comes to technology advancement. So you have to have a process in place, more than doing a one-off.

OB: What should policymakers be thinking about in terms of workforce, in terms of ethics, in terms of education policy?

Jennings: When I was a fellow at the Atlantic Council, I had a congressman tell me, "AI is just another fad, like CB radio." And curriculum reform is fast compared to some congressional processes. It's my hypothesis that Congress cannot regulate AI. It'd be a rowboat chasing a Jet Ski. What we need is a model we've already tested, one highlighted in the Oppenheimer film: the Atomic Energy Commission as set up by Truman. It's part of government, but it's outside of politics, and it's staffed by people who are A) technical experts and B) absolutely devoid of conflicts of interest. It didn't just regulate; it researched. The government invested a huge amount in studying this technology, which was both a potential threat to the world and a potential boon for medicine, energy and other fields. AI is the same way, in my view, and we need a new entity, an AI Commission. I think they need to do the red teaming completely outside of Big Tech, so that we don't just leave that incredibly important safety function to the technology people themselves. If we did, it'd be like asking Big Oil to set our pollution guidelines.

Venkatraman: I think policymakers need to develop frameworks that can evaluate the fairness of AI systems, particularly around hiring, criminal justice and loan approvals, because that can lead to problems down the road. The second thing I'd say is transparency in the inputs to these models, the data that's being used to train them. The models are only as good as the data they're trained on.

Hanley: We need money for research, money for training. We need to be thinking not just about risks and managing those risks, but also about capitalizing on the opportunities. I do think opportunities are there to address gaps in education, in justice, in access to information. There are real possibilities.

The Atomic Energy Commission was created in 1946 to manage the development, use and control of atomic (nuclear) energy for military and civilian applications.
The AEC was subsequently abolished by the Energy Reorganization Act of 1974 and succeeded by the Energy Research and Development Administration (now part of the U.S. Department of Energy) and the U.S. Nuclear Regulatory Commission. (Source: NRC.gov)

[Photo by Jason E. Kaplan: K S Venkatraman, Skip Newberry and Cass Dykeman]