13:03 08/11/2019 | 7newstar.com
Adobe users can look forward to a lot of AI assistance in the foreseeable future, including tools for photography, animation, and audio editing
(Tech) On the photographic front, Project Light Right harnesses Adobe’s Sensei AI system to bring time- and date-appropriate lighting edits to images. Rather than applying a light source and shadows to an image based solely on a user-selected position on a 3D globe, Light Right uses AI and multiple images to deduce the sun’s position and add directionally appropriate light and shadows during edits. It can also use videos and Adobe Stock photos as inputs for its lighting calculations.
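Adobe has not published how Light Right works internally, but the "time- and date-appropriate" part of the pitch rests on standard solar geometry: given a date, time, and location, the sun's elevation (and thus shadow direction and length) is computable. As a rough illustration only, here is the textbook declination/hour-angle approximation; the function name and interface are hypothetical, not Adobe's.

```python
import math

def sun_elevation(day_of_year, hour_utc, latitude_deg, longitude_deg=0.0):
    """Approximate solar elevation angle in degrees.

    Uses the standard declination/hour-angle formula; accurate to about a
    degree, which is plenty to decide how shadows should fall in an edit.
    This is an illustrative sketch, not Adobe's method.
    """
    # Solar declination: the sun's tilt relative to the equatorial plane.
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    # Hour angle: 0 at local solar noon, 15 degrees per hour away from it.
    solar_time = hour_utc + longitude_deg / 15.0
    hour_angle = 15.0 * (solar_time - 12.0)
    lat, d, h = map(math.radians, (latitude_deg, decl, hour_angle))
    elevation = math.asin(
        math.sin(lat) * math.sin(d) + math.cos(lat) * math.cos(d) * math.cos(h)
    )
    return math.degrees(elevation)
```

Near the March equinox at the equator this puts the noon sun almost directly overhead, and returns a negative elevation at night, which is the kind of signal a lighting tool needs before it can draw a plausible shadow.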
A more subtle application of AI is Project About Face, which can be used to detect edited images, generating an automated "Probability of Manipulation" score and a heatmap showing where edits have been made, including those too subtle for the human eye to catch. About Face will likely contribute to Adobe's upcoming Content Authenticity program, which promises to give photo and video viewers a sense of whether they're seeing edited or unedited imagery, and might even be used to reverse the edits, revealing the original image.
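About Face's detector is a trained neural network whose details aren't public, but the two outputs described above have a simple relationship: a coarse heatmap is pooled from dense per-pixel suspicion scores, and the overall probability summarizes the heatmap. The sketch below shows only that summarization step, with the per-pixel scores treated as a given input; everything about it is a hypothetical stand-in.

```python
import numpy as np

def manipulation_heatmap(scores, block=8):
    """Pool per-pixel manipulation scores into a coarse heatmap plus a
    single 'probability of manipulation' number.

    `scores` is an HxW array in [0, 1]. In a tool like About Face these
    would come from a trained detector; here they are just summarized.
    """
    h, w = scores.shape
    h, w = h - h % block, w - w % block              # trim to whole blocks
    blocks = scores[:h, :w].reshape(h // block, block, w // block, block)
    heatmap = blocks.mean(axis=(1, 3))               # one score per block
    probability = float(heatmap.max())               # most suspicious region
    return heatmap, probability
```

Taking the maximum rather than the mean reflects how such tools are typically read: one heavily edited region should flag the whole image, even if most of it is untouched.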
Project All In promises to solve a classic photographer's problem: the person behind the lens can't be in the group photo. All In uses Sensei to automatically blend two photos, so two people can take turns shooting the same background while the other stands in frame, yielding a composite in which both stand together in the same environment. Alternatively, All In can remove the duplicate when the same person appears in different positions across two shots.
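The heavy lifting in All In is Sensei's alignment and person segmentation; the final composite itself is a masked merge of the two aligned frames. Assuming those pieces are already available, the blend step can be sketched like this (the function and its inputs are illustrative, not Adobe's API):

```python
import numpy as np

def blend_group_shot(shot_a, shot_b, mask_b):
    """Composite two aligned photos of the same scene.

    shot_a contains person A, shot_b contains person B, and mask_b is a
    boolean HxW array marking person B's pixels in shot_b. In All In the
    alignment and the mask would come from Sensei; here they are inputs.
    """
    # Wherever the mask flags person B, take pixels from shot_b;
    # everywhere else keep shot_a, which already contains person A.
    return np.where(mask_b[..., None], shot_b, shot_a)
```

The same operation covers the second use case: passing the mask of the unwanted duplicate and swapping the shot arguments copies clean background over that person instead of adding them.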
Adobe also showed off several AI-aided animation tools. Like Samsung's recent 3D scanning and AR avatar demos, Adobe's Project Go Figure turns video of a real person's movements into skeletal frame animations that can be exported to drive a virtual character. Project Pronto can add 3D objects to smartphone videos so that the objects follow the video's camera motion, blending live footage and digital content AR-style. And Project Sweet Talk promises to automate lip-sync animation, converting recorded audio into a mesh that can be applied to flat images and animated characters.
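Sweet Talk maps speech audio to facial mesh deformations, a far richer model than anything shown here. Still, the crudest version of the idea, louder audio means a wider-open mouth, fits in a few lines and conveys the audio-to-animation-parameter pipeline. This toy stand-in is an assumption-laden sketch, not Sweet Talk's algorithm:

```python
import numpy as np

def mouth_openness(audio, sample_rate=16000, fps=24):
    """Map a mono waveform to a per-animation-frame mouth-open value in [0, 1].

    Chops the audio into chunks matching the animation frame rate, then
    uses each chunk's RMS loudness as the mouth-open amount. A real lip-sync
    system like Sweet Talk models phonemes and full face meshes instead.
    """
    samples_per_frame = sample_rate // fps
    n_frames = len(audio) // samples_per_frame
    frames = audio[: n_frames * samples_per_frame].reshape(n_frames, -1)
    rms = np.sqrt((frames.astype(float) ** 2).mean(axis=1))  # loudness per frame
    peak = rms.max()
    return rms / peak if peak > 0 else rms                   # normalize to [0, 1]
```

Each value in the returned array would drive one frame of a mouth shape; silence closes the mouth, the loudest moment opens it fully.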