Software security pioneer and AI expert Gary McGraw talks to Dennis Fisher about the risks of black box LLMs in AI and the need for regulation.
Software security and AI security expert Gary McGraw joins Dennis Fisher to discuss the findings of a new AI architectural risk analysis research paper that his Berryville Institute of Machine Learning did on LLMs, the risks of black box models, and what kind of regulation would be most effective at reducing those risks.
Under the now-live White House executive order, developers of the "most powerful AI systems" must report "vital information" related to cybersecurity measures, training plans and more.
The development and deployment of AI systems based on LLMs carry many inherent risks and should be regulated, and soon, experts say.
2023 was one of the crazier years in recent memory for security news, and we did our best to make sense of it all. We gathered some of our friends to talk about the biggest stories of the year and what we learned from them.