
The goal is to let people easily "identify what type of content this is," said Jeff McGregor, CEO of Truepic, a company working on digital content verification. "Was it created by a human? Was it created by a computer? When was it created? Where was it created?"

But all of these technical responses have shortcomings. There's not yet a universal standard for identifying real or fake content. Detectors don't catch everything and must constantly be updated as AI technology advances. Open source AI models may not include watermarks. That's why those working on AI policy and safety say a mix of responses is needed.

Laws and regulation will have to play a role, at least in some of the highest-risk areas, said Matthew Ferraro, an attorney at WilmerHale and an expert in legal issues around AI. "It's going to be, probably, nonconsensual deepfake pornography or deepfakes of election candidates or state election workers in very specific contexts," he said.

Ten states already ban some kinds of deepfakes, mainly pornography. Texas and California have laws barring deepfakes targeting candidates for office. Copyright law is also an option in some cases. That's what Drake and The Weeknd's label, Universal Music Group, has invoked to get the song impersonating their voices pulled from streaming platforms.

When it comes to regulation, the Biden administration and Congress have signaled their intentions to do something. But as with other matters of tech policy, the European Union is leading the way with the forthcoming AI Act, a set of rules meant to put guardrails on how AI can be used. Tech companies, however, are already making their AI tools available to billions of people and incorporating them into apps and software many of us use every day.

Transcript: A Third Bank Implodes. Now What?

The seizing of First Republic by regulators could signal the end of the banking crisis.

This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email with any questions.

michael barbaro

From "The New York Times," I'm Michael Barbaro. On Monday morning, the federal government took over a third failing bank, this time, First Republic. Today, I speak with my colleague, Jeanna Smialek, about whether we're at the end of this banking crisis or the start of a new phase of financial pain.

So, Jeanna, another day, another bank failure. And after these two banks had failed (Silicon Valley Bank, then a couple of days later Signature Bank), the hope, and I'd say the expectation, was that this crisis might be over. In fact, a third bank, First Republic Bank, collapsed. And it was even bigger as a bank than the previous two that failed.
