Adobe Is Making Some Big AI Upgrades: Let’s Dive In

Yesterday, at the Adobe MAX event, Adobe announced some pretty big upgrades to their AI capabilities. Eleven (!) new AI-powered prototype tools and features were showcased under the “Sneak” program, including Project Stardust and Project Primrose.

Sneak lets Adobe show off innovative technologies that are still in development, while also gauging public interest in those projects and future capabilities. Of course, not everything announced at Sneak will make it to production, but it gives the public an idea of what one of the biggest creative-platform companies is focused on, and capable of.

For example, Adobe released their AI-powered Content-Aware Fill tool in Photoshop and After Effects earlier this year, a feature first announced at a previous Sneak event. Content-Aware Fill lets you draw a circle around any part of an image and have AI fill in the blanks for you. In theory, it’s great when you need to expand an image you’re working on, but results have been mixed so far, and I haven’t personally found it all that useful yet.

However, Adobe announced a pretty big update to the Firefly engine that powers that feature, along with its expansion to Illustrator and Adobe Express. Firefly already lets users turn text prompts into pictures on Adobe’s website and in Photoshop, and a second-generation AI model promises more detail and better image quality.

OK, back to this year’s event. Here are some of the coolest things Adobe teased (they seem to be big fans of alliteration this year):

Project Stardust:

An “object-aware editing engine” that automatically identifies individual objects in a photograph so they can be easily moved, changed, or deleted. The tool lets users grab and reposition sections without manually “cutting” them out with a lasso tool. The background behind a removed object is automatically filled in to match its surroundings, and you can even generate entirely new assets to place into the image, just like with Photoshop’s text-to-image generator.

Project Stardust even identifies an object’s shadow so that it can be moved or deleted alongside other edits.

Image: Adobe



Project Primrose:

An interactive dress (yes, an IRL dress) that demonstrates the potential of “flexible textile displays,” allowing the wearer to show patterns and images on their body like a programmable screen. Adobe has technically teased this “smart display fabric” technology before, but until now we’d only seen it applied to a flat canvas and a small handbag.

A strapless dress on a headless mannequin. The dress's fabric and pattern change rapidly.

Image: Adobe


Side note: I’ve personally looked into other companies that are developing similar technologies, but for use in the metaverse. Last year, I went to a fashion pop-up for a company called Zero10 and covered the experience for Google. Within Zero10’s mobile app, users could browse a virtual rack of 3D clothing, try on pieces in real time, and purchase items as NFTs to wear within the metaverse. Cool idea, but I haven’t seen it go anywhere since.


Project Fast Fill:

Brings generative AI to video, letting editors delete people from the background of one video frame and then extend that change across the entire clip. It can also track a designated area so a change follows the subject. Adobe showed one example that added a necktie to a walking person’s outfit and another that put a new foam pattern on the sloshing surface of a latte.

Project Poseable:

Lets people convert a 2D photo of a person into a 3D model in that same posture, reposition the limbs and joints of that skeletal model into a preferred pose, and even build a new character style for the model with a text prompt. 


Project Draw & Delight:

Turns a crude sketch accompanied by a text label into a cartoon. You can add extra material with new sketches and prompts, and it’ll modify the existing design while preserving the character you’ve already chosen.


You may be saying to yourself, “These are cool, but I don’t really see myself using them on the daily.” Well, lucky for you, I have a list of some of the most useful features Adobe announced as well…

Project See Through:

Of everything shown, this feature seems like it would be useful to the largest number of users. Project See Through automatically cleans up photos, removing things like light flares, reflections from glass, and other visual obstructions. It would be a great addition to Photoshop or Adobe Express, and I could even see Google developing something similar for its Pixel phone cameras or Google Photos.

Image: Adobe

Project Res-Up:

Uses “diffusion-based upsampling technology to generate new data based on data it's been trained on previously.” Adobe showed The Verge an exclusive demo of the project, and the results are pretty impressive. In one example, it took a video from a movie made in 1947 and upscaled it from 480 by 360 to 1,280 by 960, roughly seven times the original pixel count. The result is a noticeably sharper image that doesn’t look fake, despite an AI adding the details. This feature alone could wipe out a huge share of the posts in the r/PhotoshopRequest and r/PhotoRestore subreddits. So long!

[Left: original, Right: upscaled] Running this clip from The Red House (1947) through Project Res-Up removes most of the blur and makes details like the character’s hair and eyes much sharper.

Image: The Red House (1947) / United Artists / Adobe

[Left: original, Right: upscaled] Additional texture has been applied to this baby elephant to make the upscaled footage appear more natural and lifelike.

Image: Adobe
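If you’re wondering where that “roughly seven times” figure comes from, it’s just the ratio of pixel counts between the two resolutions Adobe quoted. Here’s a quick back-of-the-envelope check (the resolutions come from the demo; the snippet itself is purely illustrative):

```python
# Pixel-count comparison for the Res-Up demo resolutions (480x360 -> 1280x960)
original_pixels = 480 * 360    # source clip resolution
upscaled_pixels = 1280 * 960   # resolution after Project Res-Up

factor = upscaled_pixels / original_pixels
print(f"{factor:.1f}x the pixels ({(factor - 1) * 100:.0f}% more)")
# -> 7.1x the pixels (611% more)
```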


Follow along with Gitsul Group to see the latest in tech, marketing, and branding.
