Will Adobe Firefly Make Generative AI More Responsible?

By the Centific editorial team

Generative AI continues to cause ripples of both concern and interest across the corporate workplace. A Gartner poll of HR professionals found that nearly half of firms are “formulating guidance” on AI use. Some firms have banned the use of the popular ChatGPT generative AI chatbot, citing concerns such as copyright infringement, data privacy, bias, and accuracy. But generative AI is more than ChatGPT. And even amid the concerns, major companies such as Adobe are coming up with tools to help businesses use generative AI responsibly. A case in point: Adobe Firefly.

What Is Adobe Firefly?

Launched at the 2023 Adobe Summit, Firefly was created to help businesses and individuals generate high-quality images and text effects. Adobe Firefly will be integrated directly into Creative Cloud, Document Cloud, Experience Cloud, and Adobe Express workflows.

Adobe said in its announcement that with Firefly, everyone who creates content (regardless of their experience or skill) will be able to use their own words to generate content the way they dream it up: from images, audio, vectors, videos, and 3D to creative ingredients such as brushes, color gradients, and video transformations, with greater speed and ease than ever before.

Why Is Firefly Significant?

Adobe has a history of releasing products that make the creation of visual content easier. So, embracing generative AI is a natural move for Adobe. But what really sets Firefly apart is that Adobe is trying to use generative AI responsibly. Adobe says that Firefly:

  • Is trained on legal-to-use content. Firefly is trained on licensed content, including Adobe Stock and other sources. This is huge. Using unlicensed images to train generative AI has already landed Stability AI, maker of the Stable Diffusion image generator, in a legal dispute with Getty Images.

  • Seeks to protect content creators. Another problem with generative AI is that content creators are getting ripped off: their content is used without permission, and they are not compensated. Adobe plans to introduce a "Do Not Train" tag for creators who do not want their content used in model training; the tag will remain associated with the content wherever it is used, published, or stored (the sketch after this list illustrates the idea). In addition, Adobe says its intent is to build generative AI in a way that enables customers to monetize their talents, and it is developing a compensation model for Adobe Stock contributors that it will detail once Firefly is out of beta.
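Adobe has not yet published how the "Do Not Train" tag will be represented, but the underlying idea is simple: the opt-out travels with the asset's metadata and is checked before the asset enters a training corpus. The minimal Python sketch below illustrates that flow under assumed names; the Asset class, the do_not_train key, and build_training_set are hypothetical stand-ins, not Adobe's actual schema or API.

```python
# Illustrative sketch only: Adobe has not published a schema for the
# "Do Not Train" tag, so the metadata keys below are hypothetical.

from dataclasses import dataclass


@dataclass
class Asset:
    """A content item plus creator-supplied metadata (hypothetical schema)."""
    uri: str
    metadata: dict


def is_training_allowed(asset: Asset) -> bool:
    """Return False if the creator has opted the asset out of model training."""
    # Hypothetical key; a real implementation would read a standardized
    # content-credentials assertion embedded in the file itself.
    return not asset.metadata.get("do_not_train", False)


def build_training_set(assets: list[Asset]) -> list[Asset]:
    """Keep only assets whose creators have not opted out of training."""
    return [a for a in assets if is_training_allowed(a)]


if __name__ == "__main__":
    catalog = [
        Asset("stock/landscape-001.jpg", {"license": "stock"}),
        Asset("portfolio/portrait-042.jpg", {"do_not_train": True}),
    ]
    usable = build_training_set(catalog)
    print([a.uri for a in usable])  # only the opted-in asset remains
```

The key design point is that the opt-out must survive copying, publishing, and re-hosting, which is why Adobe describes the tag as remaining "associated with content wherever it is used" rather than living in a separate registry.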

Adobe also noted, in passing, that “[a]s other models are implemented, Adobe will continue to prioritize countering potential harmful bias.” This is an intriguing statement given how generative AI continues to be dogged by problems with bias, for instance consistently depicting doctors as men. If Firefly can indeed mitigate bias (which remains to be seen), Adobe would be taking a meaningful step toward addressing a recurring problem with generative AI. No doubt this aspiration will be viewed with a very critical eye, though, and Adobe would do well to be transparent about exactly how it will counter harmful bias. It would also be advisable for Adobe to share how it will manage sensitive issues such as safeguarding against generated content that is NSFW, violent, or otherwise inappropriate.
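Adobe has not described its bias-mitigation tooling, but one common way to make such a claim verifiable is an output audit: generate many images for a given prompt and measure whether a protected attribute is heavily skewed. The sketch below shows that idea in minimal form; generate_image and classify_perceived_attribute are hypothetical placeholders for a text-to-image call and an auditing classifier, not Firefly or Adobe APIs.

```python
# Illustrative audit harness only: the two functions below are hypothetical
# stand-ins, not real Firefly or Adobe APIs.

from collections import Counter
import random

random.seed(0)  # keep this toy example reproducible


def generate_image(prompt: str) -> str:
    """Stand-in for a text-to-image call; returns a fake image handle."""
    return f"image::{prompt}::{random.random():.4f}"


def classify_perceived_attribute(image: str) -> str:
    """Stand-in for an attribute classifier used only for auditing outputs."""
    return random.choice(["woman", "man", "ambiguous"])


def audit_prompt(prompt: str, samples: int = 100) -> Counter:
    """Generate many images for one prompt and tally perceived attributes."""
    return Counter(
        classify_perceived_attribute(generate_image(prompt))
        for _ in range(samples)
    )


if __name__ == "__main__":
    for prompt in ["a doctor", "a nurse", "a CEO"]:
        counts = audit_prompt(prompt)
        max_share = max(counts.values()) / sum(counts.values())
        print(prompt, dict(counts), f"max-share={max_share:.0%}")
```

An audit like this does not fix bias by itself, but publishing the methodology and the resulting skew numbers would be one concrete way for Adobe to back up the claim with evidence.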

Key Takeaways

  • Generative AI is getting more pervasive in the workplace. Firefly demonstrates how quickly generative AI is making its way into the workplace even as businesses grapple with its implications.

  • Generative AI is bigger than OpenAI and ChatGPT. Adobe is the latest major company to show that generative AI is bigger than OpenAI and ChatGPT. Google and Microsoft have been incorporating generative AI into search, and Microsoft recently announced Copilot, an AI assistant for its family of Office apps.

  • Responsible AI is a business problem, not a technology issue. More companies are weighing the costs of using AI irresponsibly and coming to terms with the need to act fairly. Adobe's public messaging about Firefly shows that it at least recognizes this reality.

We believe businesses should incorporate AI in a responsible way, an approach we call Mindful AI. At Centific, when we develop AI models for clients, we rely on globally crowdsourced contributors who bring in-market subject matter expertise, mastery of 200+ languages, and insight into local forms of expression, such as how emoji are used on different social apps. This helps us ensure that AI models are inclusive of as many cultures as possible and as free of bias as possible.

There is no magic bullet that will make AI more responsible and trustworthy; by its very nature, AI will always be evolving. But Mindful AI takes the guesswork out of the process. Contact Centific to get started.