Below is a masterful oil painting of the Sears Tower at dawn. You can't find it at the Art Institute, or the Met, or even through a Google search. It sits as copy 1 of 1 on my hard drive (and now in this post). The painting is a wholly original creation by a machine learning model called DALL-E, built by OpenAI, crafted from the prompt I gave it: "oil painting of downtown Chicago at sunrise."

DALL-E and similar AI models (Midjourney is another popular one) have burst onto the scene, to much amazement and consternation. They are winning art contests. They are redefining what it means to be an artist or designer. And they are just getting started.

It's not hard to imagine the above picture running alongside a fake breaking news article about Ukraine bombing a Russian children's hospital. At a moment when trust in institutions is at an all-time low, when truth and facts are already up for debate, the slow, seemingly inexorable move towards AI-generated video content feels like a disaster waiting to happen. Another tool in the deepfake arsenal.

This will be a major problem, and solving it will require the cooperation of governments and the tech industry. We need smart public policy and technical tools that are as good at detecting AI-generated content as today's models are at creating it. We all know that government policy is reactive, so the onus will be on the private sector to find ways to mitigate the problem before any damage is done. Perhaps there is a way to incentivize the big players in this space to police each other.

For the time being, it's important to stay aware of what's out there and to recognize that there are always risks alongside the benefits of technological change. When it comes to AI, the stakes are high enough that we need to neutralize the risks before we can fully embrace the conveniences.