The recent popularity of, and hand-wringing over, ChatGPT has brought AI content creation to the forefront of the public conversation, and AI art is now receiving its fair share of attention. This new wave of digital art is rich with stimulating visuals and, naturally, its share of controversy.
Some old-school visual artists say that using AI violates the very nature of art, seeing the craft as the rawest form of purely human (and only human) expression. Notably, Rob Biddulph, an award-winning children’s book author and illustrator, is one such artist.
“[Generated art] is the exact opposite of what I believe art to be,” Biddulph told The Guardian. “Fundamentally, I have always felt that art is all about translating something that you feel internally into something that exists externally.”
Other artists are more concerned with ensuring that credit is given where credit is due, such as the digital artist Greg Rutkowski. Famous for his illustrations for sensations like Dungeons & Dragons, Sony’s Horizon Forbidden West, and Magic: The Gathering, Rutkowski is known for his characteristic style of dreamy, oil-painting-like depictions of fire-breathing dragons attacking knights, warrior goddesses, and mythical creatures, set against dark landscapes of mountains and fire. He uses tools such as Photoshop to create his digital images, and much of his art is posted online. Lately, however, his art has begun to be overshadowed by AI-generated imitations of it.
Lately, new platforms have been popping up, built mostly for entertainment rather than artistic creation, called “auto-generated image models.” These models, such as Stable Diffusion, DALL-E, and Midjourney, are fed millions of images scraped from artists’ collections, without proper credit. Given a prompt of just a few words, they can create pictures on demand: a single prompt of “fantasy scene in Rutkowski style” can produce hundreds of images almost instantly.
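To give a sense of how little effort that takes, here is a minimal sketch of the same kind of prompt-to-image workflow, using an openly released Stable Diffusion checkpoint and the Hugging Face diffusers library; the model name and prompt below are illustrative placeholders, not drawn from any artist’s catalog.

```python
# Minimal sketch: turning a short text prompt into images with an
# open-source text-to-image model (Stable Diffusion via Hugging Face diffusers).
# Model ID and prompt are illustrative only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # publicly released checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")                   # a single consumer GPU is enough

prompt = "dragon attacking a knight, dramatic fantasy oil painting"
result = pipe(prompt, num_images_per_prompt=4)   # four variations in seconds

for i, image in enumerate(result.images):
    image.save(f"dragon_{i}.png")
```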
In 2022, Rutkowski’s name was searched around 100,000 times in a single auto-generated image model, more than Picasso and Michelangelo combined. Users were no doubt deeply entertained as they discovered new digital dragons created by their own prompts. Though Rutkowski said he was interested in the potential for these AI models to direct new audiences toward his work, he quickly became concerned once he searched his own name and could find only altered copies.
“What about, in a year? I probably won’t be able to find my own work out there because [the internet] will be flooded with AI art,” he commented to MIT Technology Review.
And Rutkowski is not the only one concerned. A whole coalition of artists is taking proactive steps to secure proper credit for their work. They have created resources such as the Content Authenticity Initiative, which gives creators the tools to watermark digital pieces, and Have I Been Trained, a database that lets artists check whether their work was among the 5.6 billion images used to train these entertainment-oriented image models, namely Stable Diffusion and Midjourney.
Of course, with every new step in technology, there must be checks and balances. In the same MIT Technology Review interview, Rutkowski said he doesn’t blame users of the new models for making art with his name; though “it’s a cool experiment,” for him and many other artists “it’s starting to look like a threat to our careers.”
Legitimate AI artists, by contrast, use their own collected databases or images in the public domain. These are innovators and creators, inspired to push the visual boundaries of technology while intertwining it with humanity, exploring themes like the relationship between humans and technology, consciousness, perception, nature, and science. Their works, at their best, are an ode to the past and a revitalization of history, condensed into the present moment. Here are four artists engaging in groundbreaking creations and exploring new conceptual relationships within the infinite realm of technology.
Goodnight, Sweet AI: Refik Anadol

https://ocula.com/art-galleries/konig-gallery/exhibitions/machine-hallucinations-nature-dreams/
If you know about AI art, you probably know Refik Anadol. Currently based in L.A., the Turkish-American artist has produced groundbreaking work and is highly influential in the world of digital art. His pieces have been described as “quantum hyperspace” and “an architectural exhibition of synthetic reality experiments,” and it’s clear Anadol has created an entirely new mode of expression. For one of his most famous pieces, MACHINE HALLUCINATIONS: NATURE DREAMS, he gathered 300 million photographs of nature (the largest such collection ever assembled with artistic intent) and used them to train a GAN, a type of AI algorithm. The resulting imagery exists only within the machine, so we can actually view its “dreams,” manifested from the pigments and shapes of nature. The imagery was finalized as a 20-minute video loop displayed on an LED screen, meant as an experience for exploring consciousness, humanity, and nature. For this project, Anadol teamed up with the Google AI Quantum team to access state-of-the-art computing research. The exhibition was designed specifically for the König Galerie in Berlin, a brutalist venue inside the church of St. Agnes. Anadol’s website states that the studio will assist the future buyer with a custom computer and software, installation instructions, and backup files, but the LED screen is not included.
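Anadol’s actual pipeline is proprietary, but the core idea behind a GAN is easy to sketch: a generator network maps random “latent” vectors to images, so every frame is synthesized rather than retrieved from the photo corpus, which is why the imagery can be said to exist only inside the machine. A toy illustration in PyTorch, with architecture and dimensions that are placeholders rather than Anadol’s:

```python
# Illustrative sketch only: a tiny GAN-style generator that turns random latent
# vectors into images. In practice the generator is trained adversarially on a
# large photo corpus; here it is untrained and serves only to show the mechanism.
import torch
import torch.nn as nn

LATENT_DIM = 128

class TinyGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 512), nn.ReLU(),
            nn.Linear(512, 64 * 64 * 3), nn.Tanh(),   # a 64x64 RGB frame
        )

    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

generator = TinyGenerator()
z = torch.randn(16, LATENT_DIM)      # 16 random points in latent space
frames = generator(z)                # 16 synthetic images, none copied from any dataset
```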
Google Can Dance: Wayne McGregor

https://waynemcgregor.com/productions/living-archive
Unbound by two dimensions, Wayne McGregor is a world-renowned British choreographer and director. The recipient of countless awards for work with the most prestigious ballet companies around the world, McGregor has created pieces inspired by classic minds such as Sophocles, Dante, and Stravinsky, while also drawing on subjects like Slavic wedding rituals or the preservation of paintings through WWII. The choreography for Living Archive: An AI Performance Experiment, however, a 30-minute dance work, was not shaped by historical humanity; instead, it tuned into the technological undercurrent of modernity. Commissioned by the Music Center and L.A. Philharmonic and developed in collaboration with Google Arts & Culture Lab, Living Archive is an AI tool that “unleashed the creative potential stored at a molecular level within former works, amplifying the spectrum of possibility for choreographic decision-making and bringing dancers of the present into contact with traces of their predecessors.” The system was fed hundreds of hours of video of McGregor’s and his company’s work; from this content, it learned to recognize movement phrases and suggest multiple original phrases that could follow. Artist and filmmaker Ben Cullen Williams was a crucial part of the collaboration, working with AI to create an abstract video to frame the dancers. The resulting exhibition was a “marriage of technology,” said McGregor. The project was not only a performance: it was later opened to the public as an interactive platform for bodily movement, and users can now create their own completely unique choreography with the help of artificial intelligence.
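Google has not published the Living Archive system itself, but the broad idea, learning from pose sequences extracted from dance footage and proposing a plausible next movement, can be sketched as a simple sequence-prediction model. The joint count, architecture, and data below are illustrative assumptions, not the real system:

```python
# Illustrative sketch only: predict a candidate "next pose" from a short history
# of poses, the basic mechanism behind suggesting movement phrases that could follow.
import torch
import torch.nn as nn

NUM_JOINTS = 17            # e.g., a standard 2D skeleton from pose estimation
POSE_DIM = NUM_JOINTS * 2  # (x, y) coordinates per joint

class NextPoseModel(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.rnn = nn.LSTM(POSE_DIM, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, POSE_DIM)

    def forward(self, pose_seq):          # pose_seq: (batch, time, POSE_DIM)
        out, _ = self.rnn(pose_seq)
        return self.head(out[:, -1])       # predicted next pose

model = NextPoseModel()
history = torch.randn(1, 48, POSE_DIM)     # ~2 seconds of poses at 24 fps (dummy data)
suggested_next_pose = model(history)       # one candidate continuation of the phrase
```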
Beksinski Gets a Computer: Mario Klingemann

https://www.artsy.net/artwork/mario-klingemann-neural-decay-4257
A former Google Arts & Culture Artist in Residence, the German artist Mario Klingemann came into the spotlight a few years ago when his AI artwork Memories of Passersby I sold at a traditional auction house for $50,000, unprecedented given the conventional nature of the venue. His work has been shown at the Met, MoMA, the Mediacity Biennale Seoul, and the Centre Pompidou in Paris. His self-proclaimed “favorite tools” are “neural networks, code and algorithms.” With pieces reminiscent of Beksinski, Klingemann creates AI portraits of warped faces and peering eyes that entrance the viewer with an eerie, inhuman gaze. His series of six prints, titled Neural Decay, showcases his signature style of “neuralism.” To create it, he trained artificial neural networks, including generative adversarial networks, on encyclopedia portraits from the late 19th century, transforming male faces into female ones and then layering on doll-like features. He then used “trans-hancement models” to add lifelike textures inspired by organic decay before the images were printed on stainless steel. The series draws on the technological advances of the past decade to reminisce upon faces from past centuries, a haunting exploration of artificial portraiture.
Follow the Leader: Sougwen Chung

https://sougwen.com/project/florarearingagriculturalnetwork
Born in China, raised in Canada, and currently living in London, artist Sougwen Chung is considered a multidisciplinary pioneer in the realm of human-machine collaboration. She has spoken at conferences spanning many fields, including the Global Art Forum, the Cannes Lions International Festival of Creativity, and the World Science Festival. Coining the phrases “mark made by hand” and “mark made by machine,” Chung observes the interactions between humans and technology. Using custom-made robots that mimic her movements, she creates a feedback loop between herself and the machines that yields collaborative artwork. In her project series Studies for a Flora Rearing Agricultural Network (F.R.A.N.), she explores the “reciprocity between the technological and natural.” The biofeedback-based paintings are characterized by flourishing organic forms in vivid hues of purple, blue, and gray, stunning against a jet-black background. Poetically scientific titles such as “Language of Pathogens” and “Mutations of Presence” are an ode to the relationship between technology and the natural world.