Case Study

AI Portrait Generator: Rapidly Prototyping an Emotion-Driven Art Experience with Scalable UX Layers

My Role:

  • Concept ideation & visual direction
  • Full UX flow (lo-fi, fast-executed MVP)
  • Tech stack integration
  • User testing & iterative UX updates
  • Positioning for future monetization
AI portraits created by exhibition guests

1. Artistic Intent & Strategic Purpose

This project wasn’t just an experiment; it was designed to support my upcoming exhibition and generate attention within the contemporary art community. By blending AI with my traditional style, I aimed to create something interactive, shareable, and a little provocative: something that could catch fire in the art world and start conversations.

The web app allows visitors to upload a selfie and receive a portrait generated by an AI model trained to mimic the textures, color palettes, and mood of my original paintings. That experience sits side-by-side with physical oil paintings in the gallery space, prompting viewers to compare: Does the emotional impact change when the artist is replaced by an algorithm?

While I wasn’t focused on monetizing the app experience itself (yet), the strategic goal was clear:
1. Create buzz.
2. Draw visitors into the broader artistic narrative.
3. Encourage interest in original works and fine art prints, available for purchase as a natural extension of the digital interaction.

This app acted as a creative bridge: from virtual experience to real-world value.

2. Discovery & Concept Testing

I come from a UX and visual design background, not engineering, so going into this, my coding skills were very basic. But thanks to modern tools and a bit of stubborn curiosity, I managed to build the full experience myself.

Here’s how:

  • WordPress + Elementor — I used my existing site as a base, with plugin support for layout and UI
  • Forminator — I used a ready-made contact form plugin that allowed me to fetch uploaded files easily. It wasn’t the ideal long-term solution (I ran into some limitations later), but given the budget constraints, I prioritized free tools that could get the job done fast.
  • Custom WordPress Plugin — I built my first-ever plugin to handle image processing and connect the frontend with backend logic
  • Replicate API — Used to generate portraits from uploaded selfies via AI model
  • ChatGPT — Helped generate initial code snippets (PHP + JS), but I had to troubleshoot, modify, and stitch it all together manually
  • Custom PHP & JS — I wrote and integrated the backend logic to handle file uploads, send/receive API calls, and display generated images

I didn’t just copy-paste AI-generated code. I debugged, broke things, fixed them, and in the process actually learned how code works.

This project became an accidental crash course in backend development. It showed me that, even with limited engineering skills, it’s now possible to ship a real digital experience, as long as you’re willing to experiment, fail fast, and lean on Google and ChatGPT.
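The core hand-off is simple: fetch the uploaded selfie, build a prediction request for Replicate, and display the generated image. Here is a minimal sketch of that request-building step, shown in Python for brevity (my actual implementation was custom PHP inside the WordPress plugin); the endpoint is Replicate’s real prediction URL, but the model version, prompt, and parameter names are illustrative placeholders:

```python
import json

# Replicate's prediction endpoint (predictions are created with a POST here).
REPLICATE_ENDPOINT = "https://api.replicate.com/v1/predictions"

def build_prediction_request(selfie_url: str, strength: float = 0.6) -> str:
    """Build the JSON body sent to Replicate for one uploaded selfie.

    The model version and input names below are placeholders,
    not the exact values used in the live app.
    """
    payload = {
        "version": "MODEL_VERSION_ID",  # placeholder for the img2img model version
        "input": {
            "image": selfie_url,  # URL of the visitor's uploaded selfie
            "prompt": "expressive oil painting, textured brushwork",  # illustrative style prompt
            "strength": strength,  # how far the result drifts from the photo
        },
    }
    return json.dumps(payload)
```

In practice the plugin POSTs a body like this with an API-token Authorization header, then waits for the prediction to finish before showing the result.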

Diagram explaining my backend work

3. UX Approach: Build fast and dirty, evolve later

This was not a classical UX process, and intentionally so.

I focused on rapid delivery, prioritizing functionality over polish. I designed and deployed the first version with minimal UX structure: no Figma, no formal flows, just quick implementation to test feasibility, the AI model, and the interaction.

Once the MVP was stable, I shared it with five people to observe their behavior:

  • How they navigated the flow
  • Where confusion or hesitation occurred
  • What emotional reactions (if any) surfaced

I gathered this feedback informally — watching and asking. That simple loop gave me powerful signals to refine usability.

This was UX as a responsive layer, not a prerequisite for launch.

As the product matures, especially with plans to introduce monetization, I’ll shift toward solving specific user problems (e.g., usability, clarity, image retention, personalization, sharing, etc.). UX will evolve from reactive to strategic.

4. Key Challenges

Working with AI + ChatGPT

Using ChatGPT to write backend logic felt like working with a very junior developer. It was fast but chaotic: code was inconsistent, easily broken, and needed constant debugging.

But for someone with low coding experience, it gave me the scaffolding I needed.

Lesson: Speed wins at concept stage. It’s okay to wrestle with messy tools if they help you ship.


Debugging Everything

Small changes broke everything. I got a real taste of developer frustration: infinite debugging cycles, error stack traces, and the fragile nature of API calls.
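Much of that fragility came from the asynchronous shape of the API: a prediction is created, then has to be polled until it succeeds or fails, and every unhandled status is a potential breakage. A sketch of the kind of defensive polling loop this pushed me toward, in Python for brevity (the real version was PHP, and `fetch_status` is a stand-in for the actual HTTP call):

```python
import time

def wait_for_prediction(fetch_status, poll_seconds=1.0, max_polls=60):
    """Poll a Replicate-style prediction until it finishes or errors out.

    `fetch_status` is an injected callable returning the prediction as a
    dict (e.g. {"status": "processing"}), so the loop can be exercised
    without network access.
    """
    for _ in range(max_polls):
        prediction = fetch_status()
        status = prediction.get("status")
        if status == "succeeded":
            return prediction["output"]  # typically a list of image URLs
        if status in ("failed", "canceled"):
            # Surface the failure instead of silently showing a broken image.
            raise RuntimeError(f"prediction ended with status: {status}")
        time.sleep(poll_seconds)  # still "starting" or "processing"
    raise TimeoutError("prediction did not finish within the polling budget")
```

Handling the "failed" and timeout branches explicitly was exactly the kind of thing the first version lacked, and the source of many of those debugging cycles.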


Model Training Limitations

I experimented with training my own LoRA model using a small dataset of my original paintings. Unfortunately, the results weren’t usable — likely due to insufficient data and lack of style consistency across the training set.

In the end, I switched to a pre-trained Stable Diffusion model, using img2img with fine-tuned parameters to steer results toward my desired aesthetic. This trade-off allowed me to deliver a more stable user experience within time and resource constraints.
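The "fine-tuned parameters" mostly came down to the img2img strength trade-off: too low and the output is just the selfie, too high and the likeness disappears. A rough illustration of that dial (parameter names follow common Stable Diffusion img2img conventions; the values are examples, not the exact ones I shipped with):

```python
# Illustrative img2img settings; names follow common Stable Diffusion
# conventions, and the values are examples rather than the shipped ones.
IMG2IMG_SETTINGS = {
    "strength": 0.55,           # 0.0 keeps the photo, 1.0 ignores it entirely
    "guidance_scale": 7.5,      # how strongly the prompt steers the image
    "num_inference_steps": 30,  # more steps = slower but more refined output
}

def describe_strength(strength: float) -> str:
    """Rough intuition for what a given denoising strength produces."""
    if strength < 0.3:
        return "mostly the original selfie, lightly repainted"
    if strength < 0.7:
        return "selfie recomposed in the painted style"
    return "style dominates; the likeness may be lost"
```

Finding the band where the portrait still reads as the visitor, but clearly in my painting style, was the bulk of the tuning work.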


UX Trade-offs

I deliberately ignored classical UX practices in round one. The first version lacked error handling, loading states, and onboarding, and that was fine. Once I confirmed that the core idea worked, I layered in better feedback mechanisms based on real user behavior.

5. Outcomes & Strategic Evolution

This project was a fast-moving, creatively-driven experiment that bridged technology, art, and user experience — and it delivered tangible results:

  • Fully functional MVP, live and demo-ready: https://manjuna.com/upload/
  • UX evolved through direct observation and rapid feedback loops
  • Exhibition-ready installation, provoking reflection on AI-generated creativity
  • Strategic foundation laid for future monetization via artwork and print sales


What’s Next

As I continue refining the experience, the focus will shift from experimentation to value-driven UX — particularly for repeat users and collectors. Key roadmap goals include:

  • Enhancing UX polish for smoother flows and emotional engagement
  • Adding sharing and saving features for generated portraits
  • Exploring pricing models for premium art outputs (e.g., framed prints, originals)
  • Introducing mobile optimization and accessibility improvements


Final Reflection

This project reshaped how I think about building digital experiences, especially as a UX professional. It was a reminder that shipping fast, learning through use, and adapting based on real behavior can be more powerful than any polished prototype.

UX isn’t just wireframes and research: it’s watching, listening, and layering purpose onto something that already exists.

In a world where creation tools are becoming more accessible, the skill isn’t just in planning: it’s in responding, refining, and evolving quickly.

6. Want to collaborate?

I’m always open to exploring creative tech, fast delivery experiments, and AI x UX collaborations. Let’s talk!

Please feel free to check out my art portfolio: https://manjuna.com/artist/