Emilia Trevisani's interview

2026-04-27 | interview

Emilia Trevisani is a visual artist based in Italy. A self-taught path through programming languages, 3D scanning, AI, and generative art tools has led her to find a new voice to describe the world within and around her.

Primavera Gentile - Horizontal

You were a video journalist before becoming a digital artist. How did that transition happen?

Yes, I'm Italian, born in Milan, but I live in Tuscany - in Livorno, to be exact, a small coastal town. I used to be a journalist and video reporter. At the beginning, my former editor sent me here on assignment, and then I decided to stay and work as a freelancer from here.

My transition to digital art happened during the first lockdown, in 2020. As a side project, I used to make music videos for friends. I was making them by hand, using a technique called rotoscoping: you draw on the footage so that your drawing is in motion. I was doing it with ink, frame by frame - 24 frames per second. It took me four months to make a three-minute video. I used to look online for other people doing that kind of work, and somehow I stumbled into 3D generative art. I saw things moving with sound and I thought: that's insane, I have to learn this. I dropped journalism. I dropped the hand animation. I started figuring out how this kind of work was possible.

Can you describe your artistic practice - what is your style, your favorite software?

I often combine different techniques: generative 3D, photogrammetry, AI, probably because those are the techniques I prefer and the ones through which I can best express what I feel. Sometimes it is through creating something audio-reactive, and sometimes it's through what I see in nature. Nature is something that really inspires me. What I find most exciting is the idea of translating what surrounds me into something deep, more personal. I try to construct art as I see it in my mind. I think being slightly neurodivergent has helped me, because a lot of it is based on synesthesia - I perceive sounds as forms and colors. Generative art is one of the few places where you can actually work that way.

I use TouchDesigner as my main software - it's a magic place to experiment. You can manipulate 3D objects, you can also transform elements based on sound. There's always something new to learn, and the more you know, the more you can do.

I really don't like to use ready-made 3D objects. I prefer to make my own with photogrammetry. It's a bit of a tricky technique - it requires time, because you have to take a lot of pictures of your subject, and you have to be lucky with the light. It's already a process within the process: in order to obtain your 3D object, there's a lot of work beforehand.

Narciso - Vertical

Can you tell us some more about how you integrate AI in your practice?

I experiment a lot with AI, but I try to do it in the most personal way that I can. I really don't love using pre-trained models. With AI, I would not be able to create an artwork using "prompt to image," as it wouldn't feel like it was mine. I mean, nothing against those who do that, but it's not my cup of tea; sometimes it feels too much like it was generated from stolen Pinterest pictures.

I prefer to use it as much as I can as a tool, not as a collaborator. At the very beginning, I didn't know how to train my own model with my own data set. I started developing 3D AI works - a mix of what I was doing in TouchDesigner, customized with a Stable Diffusion API running directly inside TouchDesigner. Now I use other techniques, like building my own datasets from my own pictures and creating my own customized models with Stable Diffusion. It is something that I really prefer to do.
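TouchDesigner's scripting layer is Python, so running a Stable Diffusion API "directly inside TouchDesigner" typically means sending the current frame to a local image-generation server over HTTP. As a hedged sketch only - the endpoint path, field names, and AUTOMATIC1111-style img2img API are assumptions for illustration, not her confirmed pipeline - a request payload might be assembled like this:

```python
import base64
import json

def build_img2img_payload(image_bytes, prompt, denoising_strength=0.55, steps=20):
    """Build a JSON payload for a hypothetical local Stable Diffusion
    img2img endpoint (modeled on AUTOMATIC1111-style APIs; all field
    names here are assumptions, not the artist's actual setup)."""
    return {
        # the source frame, base64-encoded as these APIs usually expect
        "init_images": [base64.b64encode(image_bytes).decode("ascii")],
        "prompt": prompt,
        # how far the model may drift from the TouchDesigner frame
        "denoising_strength": denoising_strength,
        "steps": steps,
        "width": 512,
        "height": 512,
    }

# Inside TouchDesigner, a Script operator could grab the rendered frame,
# build this payload, and POST it to something like
# http://127.0.0.1:7860/sdapi/v1/img2img, then load the result back in.
frame = b"\x89PNG"  # placeholder bytes standing in for a real PNG frame
payload = build_img2img_payload(frame, "soft botanical forms, film grain")
print(json.dumps(payload)[:40])
```

The point of the round trip is that the generative model restyles frames she has already composed in 3D, rather than inventing images from a bare text prompt.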

It's important, even in our practice, to ask ourselves: how is this possible? If, in order to keep servers cool somewhere, people are using tons of water, electricity and so on… it's something that makes me uncomfortable. So, I'm aware that I'm somehow part of the problem, and that's why I do what I can, trying to use as little energy as possible with the means that I have.

You said that you customize your own model. What kinds of images do you put in that?

First of all, I use a Google Colab page to train my models, in order to have access to a faster GPU. I use Stable Diffusion - a very old model, so old that I'm probably the only one in the world using it right now. I keep a lot of pictures; it's probably a reflection of my previous activity as a videographer/photographer. So I have a lot of shots, like street photography, or thousands of different light experiments.

To create my models, I don't use very many images - it's always roughly between 90 and 130. It takes me more or less two to three hours to generate my own customized model. I just put everything together in my datasets. My datasets are based on pictures of real life, things all around - animals, landscapes, close shots of someone… whatever touches me. I really focus more on the light, the colors, and the feeling of the picture.
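Her numbers describe a small, hand-curated fine-tuning set rather than a scraped corpus. As a purely illustrative sketch of that curation step (the folder layout and the 90-130 sanity check mirror her description; none of this is her actual tooling), one could gather and check such a dataset like this:

```python
from pathlib import Path
import tempfile

IMAGE_EXTS = {".jpg", ".jpeg", ".png"}

def collect_dataset(folder, min_images=90, max_images=130):
    """Collect image paths for a small fine-tuning dataset and report
    whether the count falls in the 90-130 range mentioned in the
    interview. Hypothetical example, not the artist's actual setup."""
    paths = sorted(
        p for p in Path(folder).iterdir()
        if p.suffix.lower() in IMAGE_EXTS
    )
    in_range = min_images <= len(paths) <= max_images
    return paths, in_range

# Usage sketch: a throwaway folder with too few images fails the check.
tmp = Path(tempfile.mkdtemp())
for i in range(3):
    (tmp / f"shot_{i}.jpg").write_bytes(b"fake image data")
paths, ok = collect_dataset(tmp)
print(len(paths), ok)  # 3 images, below the 90-image floor
```

Keeping the set this small is what makes a two-to-three-hour fine-tune on a single Colab GPU plausible.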

Hidden Nature

I've been noticing, speaking to artists, how much the digital artists help each other out, how much they teach each other. What's your experience of this community?

TouchDesigner is a magic place to experiment. It's very democratic software, in my opinion, because it gives you the opportunity to do amazing things even with a straightforward laptop. The community helps a lot; that's important, because without this very open and friendly community I couldn't have learnt as fast. There are many YouTube tutorials, you can learn a lot with your own computer, and the more you know, the more you can do.

I've shared my project files with other people, and I've received very important tips from others. We are really close, even though we've almost never met in real life. So far I've had the chance to meet only one guy, an Indian artist, because we had a collaboration that was displayed in Arles, and so we met there. Maybe this teamwork is something that doesn't happen in traditional art. Even journalists are very secretive - often they don't want to share or help each other much, even if of course there are some exceptions. So for me, it was really great to see that there was this huge, friendly, generous community out there.

Aloe - Horizontal

You create your own 3D objects through photogrammetry rather than using pre-made ones. Why?

I started by working with TouchDesigner. After a few weeks, I realized that you could actually manipulate 3D objects that weren't just a sphere or a box… You could input your own 3D objects. But, I was so disappointed with the free 3D objects I found. Why do the plants look like plastic? I don't like them.

And so I thought: my god, I love spending time in nature. I couldn't do much at the time, but I had a balcony, I had my small plants… so I started with that. My partner, my cat, everything I had in my house. The first outcomes were awful, truly terrible. It took a lot to learn. The very first thing I scanned was a lighter, quite a small thing but I was so excited when I saw this lighter appear in 3D. I could manipulate it, move it all around, I was so happy.

I also tried many different software programs to learn photogrammetry. I'm not sure I remember the first one I used - but I know it took hours. There were no RGB colors; you just had this… white thing. You could apply a texture later, but not the real texture. It was a bit tricky. Now I'm using RealityCapture, which is amazing. It's from Epic Games, it's much easier, and it makes it very fun to construct your own 3D stuff. You even learn a lot about photography, because photogrammetry has its own rules about lighting.

I'd love to dive a little more into your photogrammetric and generative works. What's the process of creation - why do you pick, say, an aloe vera? When you pick a plant, do you know exactly what the end result will be, or do you let it come about naturally?

With the aloe vera, I knew I wanted to play with the fact that it was something tall but tiny, without too many leaves. The aloe vera is connected to my life because it's a plant I have here at home, and it blooms every year. Every time it blooms, I take pictures of it.

So I've already done three different 3D scans of the aloe - one for each blooming, over the last three years. And I will keep doing it if it blooms again, of course. That's why I did it. I really didn't know, for example, that the flower would give this kind of feedback. That was actually surprising for me. And so I decided to keep diving in with that 3D scan I made. It was a bit tricky to do on the balcony because of the light - it's not a big balcony.

Time Flies

What drew you to that rough, early AI aesthetic rather than the hyper-realistic output people chase today?

When I started working with AI, it was around the same time I started working with TouchDesigner, back in 2021. At the beginning, the only thing you could do on free apps on the internet was text-to-image, and it gave you very small images, like 250 by 250 pixels, all blurred, almost abstract. I loved them. It was the very beginning, so it looked like magic. I don't want to abandon that part of my practice. I want to keep experimenting with it. I'm not working with AI for a realistic aesthetic. I never did. I'm not aiming to make a hyperrealistic movie with AI. I want to see more of what AI did at the beginning. It's the most fascinating part of the path so far.

I'm very affected by this kind of aesthetic because it's lyrical, it's dreamy, it's funny, it's eerie, and I like it. It's interesting because surreal art comes from the subconscious and dreams, and so it's almost like this is the machine's subconscious. I like this kind of rough, lo-fi aesthetic. My 3D scans aren't perfect. Sometimes they're a bit broken, but I like it. It's like a memory: it isn't perfect. It's a bit blurry in some parts, and this is the point. The 3D scans I make are like 3D memories of a specific moment. Pictures are memories. It's all connected… I do feel like there is some kind of emotional and human aspect to it, despite the fact that it's just dots moving. It's very powerful.

When someone encounters your artwork, what is the feeling, the idea, the reaction that you hope they have?

I hope they find something different from what I, the artist, was thinking of. If they feel something even of their own, something connected to themselves, I'll be glad. That's it, I hope that they find something that resonates somehow with their inner life.