A well-known specialist in the subject argued against the notion that a new government body should be established to regulate artificial intelligence, saying people need to rethink their approach to regulation.
“Regulation is a really hard question,” Andres Sawicki, a University of Miami law professor and director of the BILT concentration, told Fox News Digital. “The subject of AI is too vast to be addressed in a single comprehensive manner.”
Sawicki suggests taking a more practical, incremental approach to AI rather than tackling it all at once.
Think about the real-world challenges that AI is affecting, such as copyright and patent concerns, he advised. “Pay close attention to the implications the technology is having and how AI is affecting this or that issue. A Department of AI shouldn’t be in charge of handling everything at once.”
Sawicki’s remarks come as the notion of a new regulatory body exclusively for artificial intelligence (AI) is gaining traction on Capitol Hill. Sen. Michael Bennet, D-Colo., for instance, put forth legislation last month that would establish a new federal agency to oversee AI.
Sam Altman, CEO of OpenAI, spoke before a Senate Judiciary Subcommittee about the necessity for governmental regulation of AI technologies a few days prior to Bennet’s proposal. Numerous senators from both parties offered their support for the creation of a federal agency to oversee the disruptive technology at the same hearing.
One of Sawicki’s concerns about the proposal appears to be that no one can predict what will happen next.
Uncertainty, he remarked, “is the best word to describe this area. Although the technology is currently highly remarkable, the industrial organization and geopolitical ramifications are still developing. I would warn that the way things currently are is probably not the way they will be in six months, a year, or even five years. The AI leaders of today might not lead the field tomorrow. The objective should be to promote transparency and competition in the face of such uncertainty.”
Sawicki mirrored the worries of other AI specialists, like DeepAI creator Kevin Baragona, who recently told Fox News Digital that he had his doubts about the federal government’s capacity to deal with AI and that insiders aren’t any more prepared than the average consumer for what’s to come.

Whether AI will ultimately be a force that harms or benefits humans is a significant question that worries many people. Scientists, professors and tech-sector executives issued a new warning last month, published by the Center for AI Safety: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Sawicki believes that both optimism and reservations about AI’s potential advantages are valid.
“You can imagine a ‘Terminator’ future where drones and robots decide that humans interfere with their goals and should be eliminated,” he said, though he added that it’s difficult to conceive of such a scenario given the state of the technology today. “The technology also shows a lot of promise in other fields. People could gain much greater access to knowledge, for instance, by learning with a live chatbot that can respond to inquiries in the real world. While keeping in mind the possibility of negative results, we must concentrate on these kinds of possibilities.”
Sawicki went on to remark that humans, who are capable of evil deeds and who essentially act as “mediators” between AI and the real world, are at least as big of a worry as AI itself.

Additionally, he claimed that while AI will disrupt the economy, society as we know it will not completely change.
Digital technologies “replaced phone operators,” he said. AI may well replace some jobs in the future, but it won’t take the place of people entirely. The majority of people would agree that the Industrial Revolution was worthwhile despite the job losses.
When asked how important it would be for the U.S. to surpass rivals like China in the field of artificial intelligence, Sawicki responded that it would be crucial, but cautioned that the dynamics were not the same as those of a conventional arms race between competing nations.
Being the forerunner of a potent emerging technology might be advantageous for the U.S., he suggested, but the race for nuclear weapons is not the ideal analogy for the AI race.