28.08.2024

8 min.

Cocoon AI: Content for Sellers
How we developed Cocoon AI to help sellers create AI content for marketplaces
This summer, we completed the development of Cocoon AI from scratch — a web and mobile application that uses neural network algorithms. The technology generates realistic photos of clothing and products for listings, which sellers then use on marketplaces, in retail and wholesale stores, and on social media.
How does it work? The user uploads photos of clothing on a mannequin, and the output is the same clothing but on pre-designed AI models.
Idea and Concept
The development of the app began with the client's idea to create a tool that simplifies content creation for sellers on marketplaces. The inspiration came from the challenges small brands and sellers face when producing high-quality product images. Creating compelling content requires a lot of time and resources. Therefore, the main goal of the app is to reduce production time and shooting costs while minimizing human error. Now, there’s no need to hire professional photographers, models, or rent a photo studio. The Cocoon app completely replaces them.
Examples of early-stage algorithm performance
The client didn’t come empty-handed; they already had a logo and a design concept for the app.
Our team was tasked with completing the entire development cycle: from creating UX prototypes and designs to implementing the frontend and backend. Additionally, we worked closely with the client’s teams, who were responsible for developing the API and their custom neural network.
Development Process
1. Research and UX Prototyping
Since the client only had an idea, we conducted a full market analysis, studied similar AI-powered apps, and analyzed user feedback. This helped us identify common UX mistakes, which we took into account during the development.

The Project Manager created a project timeline to closely monitor deadlines and keep the client informed about any potential delays to the release. Such a plan brings clarity not only to the client but also to the entire team, helping everyone stay focused on each stage and see the bigger picture.
After a thorough analysis, the team began creating wireframes and mapping out the user journey to make the product intuitive and user-friendly. We went through multiple iterations and calls with the teams, refining and adjusting the interface after each meeting.

As a result, we developed the following prototype for the website and app:
The client noted how deeply we were involved in the project.
2. Design
Before moving on to the full design and creating all the screens, we always start with a design concept: a few screens for both the desktop and mobile versions. This approach helps avoid spending too much time on revisions if the client requests changes, and it gives the client an early idea of how everything will look. Only after the design concept is approved does the designer start creating layouts for the web version and the mobile application.
This project was very large, with many sprints and future development ideas, so it was crucial for us to compile a UI kit and establish a design system. To build it, the designer used only custom components, ensuring the project could be easily adapted to the client's needs.
During the design phase, we also held iterative calls with all the teams to adjust the overall direction and assist with API development, as the backend was being written in parallel. Naturally, we coordinated every stage with the client as well.
3. Development
The development of the website and mobile app proceeded simultaneously. It was crucial to release the app on both platforms, the App Store and Google Play, so we chose the Flutter framework. Flutter lets us write a single codebase for both Android and iOS, ensuring a consistent appearance across platforms.

The project integrated two types of payments: for Android and the web version, payments were handled through YooKassa, while for iOS we used native in-app purchases. Together with the client's team, we planned and designed the payment interaction between the app and the backend.
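To give a sense of how this split can be organized in code, here is a minimal Flutter (Dart) sketch of platform-based payment routing. The class names, endpoint, and stubbed purchase call are illustrative assumptions, not the actual Cocoon AI implementation.

```dart
// Minimal sketch: pick a payment gateway based on the platform.
// All names and endpoints here are illustrative, not production code.
import 'package:flutter/foundation.dart'
    show kIsWeb, defaultTargetPlatform, TargetPlatform, debugPrint;
import 'package:http/http.dart' as http;

abstract class PaymentGateway {
  Future<void> purchase(String productId);
}

/// Android and web: the backend creates a YooKassa payment and returns a
/// confirmation URL that the app opens for the user to complete the payment.
class YooKassaGateway implements PaymentGateway {
  @override
  Future<void> purchase(String productId) async {
    final response = await http.post(
      Uri.parse('https://backend.example.com/payments/yookassa'), // hypothetical endpoint
      body: {'product_id': productId},
    );
    final confirmationUrl = response.body; // simplified: URL returned in the body
    debugPrint('Open YooKassa confirmation page: $confirmationUrl');
  }
}

/// iOS: native in-app purchases through StoreKit (handled by the
/// in_app_purchase plugin in a real app); stubbed here to stay self-contained.
class AppStoreGateway implements PaymentGateway {
  @override
  Future<void> purchase(String productId) async {
    debugPrint('Trigger StoreKit in-app purchase for $productId');
  }
}

/// Chosen once, at the moment the user taps "Subscribe".
PaymentGateway selectGateway() =>
    (!kIsWeb && defaultTargetPlatform == TargetPlatform.iOS)
        ? AppStoreGateway()
        : YooKassaGateway();
```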
In addition, we developed our own backend server. It stores user projects, their cards, and generations. For a single photo, a user can create multiple generations with different parameters, so all of this information needs to be stored. Our backend also handles and syncs all payment statuses: who paid for what, how many generations were purchased, when the subscription expires, and so on.
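To give a rough idea of the data involved, here is a simplified Dart sketch of the per-user records such a backend keeps. The class and field names are illustrative assumptions, not the production schema.

```dart
// Rough sketch of the per-user data the backend keeps; names are illustrative.
class Generation {
  final String id;
  final String sourcePhotoUrl;       // mannequin photo uploaded by the seller
  final String resultImageUrl;       // AI-generated result
  final Map<String, Object?> params; // chosen model, background, prompt, etc.
  final DateTime createdAt;

  const Generation({
    required this.id,
    required this.sourcePhotoUrl,
    required this.resultImageUrl,
    required this.params,
    required this.createdAt,
  });
}

class ProductCard {
  final String id;
  final String title;
  // A single photo can have several generations with different parameters,
  // so the card keeps the full history.
  final List<Generation> generations;

  const ProductCard({
    required this.id,
    required this.title,
    required this.generations,
  });
}

class UserProject {
  final String id;
  final List<ProductCard> cards;
  const UserProject({required this.id, required this.cards});
}

class BillingState {
  // Synced from YooKassa callbacks and App Store receipts.
  final int generationsPurchased;
  final DateTime? subscriptionExpiresAt;
  const BillingState({
    required this.generationsPurchased,
    this.subscriptionExpiresAt,
  });
}
```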

Since we also worked in conjunction with the backend managed by the client's team, our task was to integrate with it: process and send requests, and receive and cache responses.
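The sketch below shows, in simplified form, what such an integration layer can look like: it sends a generation request and caches the parsed response. The endpoint path and payload fields are assumptions made for illustration.

```dart
// Simplified sketch of calling the client's generation API and caching
// responses; the endpoint path and payload fields are illustrative.
import 'dart:convert';

import 'package:http/http.dart' as http;

class GenerationApiClient {
  GenerationApiClient(this.baseUrl);

  final String baseUrl;
  final Map<String, Map<String, dynamic>> _cache = {};

  /// Sends an uploaded photo for generation and returns the parsed response,
  /// reusing a cached result when the same request was already processed.
  Future<Map<String, dynamic>> generate(
      String photoUrl, Map<String, String> params) async {
    final cacheKey = jsonEncode({'photo': photoUrl, 'params': params});
    final cached = _cache[cacheKey];
    if (cached != null) return cached;

    final response = await http.post(
      Uri.parse('$baseUrl/generate'), // hypothetical endpoint
      headers: {'Content-Type': 'application/json'},
      body: jsonEncode({'photo_url': photoUrl, 'params': params}),
    );
    if (response.statusCode != 200) {
      throw Exception('Generation request failed: ${response.statusCode}');
    }

    final result = jsonDecode(response.body) as Map<String, dynamic>;
    _cache[cacheKey] = result;
    return result;
  }
}
```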
4. Testing and Improvement
After creating the MVP (Minimum Viable Product), we entered the testing phase, during which bugs were identified and fixed, and the performance of the neural network algorithms was improved.

The client provided a group of sellers who tested the product in real-world conditions: both the web version and the mobile app. We took their feedback into account and used it to create a backlog for subsequent sprints aimed at product enhancement.
After the initial testing, we also updated the FAQ page, adding answers to the most common questions from users and testers. As a result, support requests were nearly halved.
Additionally, we developed a design guide on how to properly photograph products to achieve the highest quality results. This guide is sent to users by the support team.
Technological Foundation
At the core of Cocoon AI is a neural network trained on vast datasets, including product images taken under various conditions.
Neural Network Training: Significant effort was devoted to collecting and annotating data so that the neural network could learn from diverse examples. We were also assisted by a group of testers who uploaded real photos of their products and processed them through the service.
Image Generation: The algorithms use computer vision and machine learning methods to analyze the uploaded image and create realistic pictures. The neural network can add lighting, shadows, and even adjust the background to fit the product.
Examples of item generation
Error Correction: One of the main challenges for the team was addressing one of the most common issues with neural networks: incorrect finger generation. Instead of five fingers, the neural network might produce ten, or the shape of the hands could be distorted. A considerable amount of time was dedicated to solving these problems.
Accurate Composition and Lighting: The developers succeeded in moving away from the standard method of overlaying a cut-out object onto a background. Now a true composition is built around the object: the neural network doesn't just place it onto a background, it adjusts the lighting, creates shadows, and integrates the object so that it looks realistic. The result is akin to a professional photoshoot.
User generation result
Key Features of the App
Speed: Image generation takes just a few seconds, allowing users to get results instantly. Users can also submit multiple photos for generation simultaneously.
Prompt-Based Generation from Text Descriptions: Users can generate any background for an object, in any location, with any preferences, using text prompts. The neural network will also add lighting, shadows, and adjust the background to fit the product, ensuring the image is as realistic and vivid as possible.
Unified Account: Users can start a project in the app and finish it on the website, or vice versa. All data syncs between the app and the web version.
Support: The built-in user support system helps resolve issues in real-time, collects feedback, and uses it to improve the app.
Diverse Models: Users have access to over 40 models of various ages and ethnicities, including children. This is noteworthy because neural networks typically struggle to render children well.
In the future, we plan to add the ability to customize models by selecting their hair, eye, and even skin colors. This will broaden the possibilities for creating cards for any product and audience.
Cropping: When exporting images, users will be able to choose a format for cropping the image or save the original version.
Effects: This feature is still in development. Once implemented, users will be able to overlay text descriptions, stickers, and images onto the generated cards. The designer is working on templates and selecting stylish fonts to draw attention to the product.
Project Setbacks
1. DDoS Attack: Since the project required integration with an API developed by the client's team, we faced many challenges. For example, shortly before the planned release, the client's servers were hit by DDoS attacks, causing the backend to be down for a while and putting the release timeline at risk. After the servers were restored, we had to significantly speed up our work and put in twice the effort to meet the release deadline.
2. App Store Rejections: The App Store initially rejected the app due to sanctions imposed on Russia. This meant that a paid app couldn't be listed on a Russian account; only free apps were allowed, which was not suitable for us since our digital product includes subscriptions. After several joint calls with all teams, we decided that the client would open a second account in Armenia, implement In-App Purchases, and provide an option to switch from the app to the website for payment if necessary. As a result, after a few rejections and revisions, the App Store approved the app.
Result
The client was very pleased with our team's work. After the release, we received a lot of positive feedback from sellers.
Post-Release Afterparty
After the release of the app and website, we assisted the client in participating in a major Ozone conference, which attracted the client’s potential target audience.

For the conference, we created two animated videos and helped prepare the promoters who would present our product to visitors. We provided a comprehensive product guide, demonstrated its key features and benefits, and also gave test access to the app so the promoters could familiarize themselves with the product in practice.
Our team also participated in designing the conference booth. Here’s the approved design:
The conference was a success, and the product generated a lot of interest. People actively registered and paid for subscriptions.
Thank you for reading the article to the end and joining us on the journey of creating the Cocoon AI app!