This API detects unsafe or explicit content in images and returns labels with confidence scores. It is fast and lightweight, making it well suited to moderating user-generated content at scale without compromising performance.
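
A typical integration uploads an image, reads back the labels and confidence scores, and flags anything above a chosen threshold. The sketch below illustrates that flow in Python; the endpoint URL, auth header, field names, and response shape are assumptions for illustration, not the actual API contract.

```python
import requests

API_URL = "https://api.example.com/v1/moderate"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"                         # hypothetical bearer-token auth

def moderate_image(path: str) -> list[dict]:
    """Upload an image and return a list of {"label": ..., "confidence": ...} entries."""
    with open(path, "rb") as f:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
        )
    resp.raise_for_status()
    return resp.json()["labels"]  # assumed response shape

# Flag any label whose confidence exceeds a moderation threshold.
for result in moderate_image("upload.jpg"):
    if result["confidence"] >= 0.85:
        print(f"Flagged: {result['label']} ({result['confidence']:.2f})")
```

The threshold (0.85 here) is a policy choice: lower it to catch more borderline content at the cost of more false positives, or raise it to reduce review load.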