
Adobe partners with ethical hackers to build safer AI tools

In a recent blog announcement, Adobe unveiled an expansion of its bug bounty program, aimed at fortifying the security of its AI tools, with a particular focus on Content Credentials and Firefly.

Social Samosa

Adobe has announced an expansion of its bug bounty program, aimed at incentivizing security researchers to discover and responsibly disclose bugs specific to its implementations of Content Credentials and Adobe Firefly, promoting safe AI usage.

"With an open dialogue, we aim to encourage fresh ideas and perspectives while providing transparency and building trust," stated Adobe in a recent blog post.

Content Credentials, based on the C2PA open standard, serve as tamper-evident metadata attached to digital content, offering transparency about how that content was created and edited. They are currently integrated across various Adobe applications, including Adobe Firefly, Photoshop, and Lightroom.

"We're crowdsourcing security testing efforts for Content Credentials to bolster Adobe's implementation against traditional risks and unique considerations associated with the provenance tool, such as potential abuse by incorrectly attaching them to the wrong asset," the blog added.

The blog also highlighted the importance of understanding and mitigating potential risks arising from AI usage, as well as Adobe's commitment to advancing safe, secure, and trustworthy AI, including transparency about the capabilities and limitations of large language models (LLMs).

"Adobe has long focused on establishing a robust cybersecurity foundation through collaboration, talented professionals, partnerships, leading-edge capabilities, and deep engineering prowess. We prioritize research and collaboration with the broader industry to responsibly develop and deploy AI," read an excerpt from the blog.

The company has been actively engaging with partners, standards organizations, and security researchers to enhance product security, with the expansion of the bug bounty program being another step in that direction. Dana Rao, executive vice president, general counsel, and chief trust officer at Adobe, emphasized the critical role of security researchers in enhancing security and combating misinformation.

"We're committed to working with the broader industry to strengthen our Content Credentials implementation in Adobe Firefly and other flagship products, bringing important issues to the forefront and encouraging the development of responsible AI solutions," Rao added.

“Building safe and secure AI products starts by engaging experts who know the most about this technology’s risks. The global ethical hacker community helps organizations not only identify weaknesses in generative AI but also define what those risks are,” said Dane Sherrets, senior solutions architect at HackerOne. “We commend Adobe for proactively engaging with the community; responsible AI starts with responsible product owners.”
