Google Launches SynthID Detector to Catch Cheaters in the Act

Decrypt

With deepfakes, misinformation, and AI-assisted cheating spreading online and in classrooms, Google DeepMind unveiled SynthID Detector on Tuesday. This new tool scans images, audio, video, and text for invisible watermarks embedded by Google’s growing suite of AI models.


Designed to work across multiple formats in one place, SynthID Detector aims to bring greater transparency by identifying AI-generated content created by Google's models, including audio from NotebookLM and Lyria and images from the Imagen generator, and by highlighting the portions most likely to be watermarked.


“For text, SynthID looks at which words are going to be generated next, and changes the probability for suitable word choices that wouldn't affect the overall text quality and utility,” Google said in a demo presentation.


“If a passage contains more instances of preferred word choices, SynthID will detect that it's watermarked,” it added.


SynthID adjusts the probability scores of word choices during text generation, embedding an invisible watermark that doesn’t affect the meaning or readability of the output. This watermark can later be used to identify content produced by Google’s Gemini app or web tools.
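The mechanism described above, biasing word probabilities during generation and later counting how often the "preferred" choices appear, can be illustrated with a toy sketch. This is not Google's proprietary SynthID algorithm; it is a minimal analogue of published "green list" watermarking schemes, where the vocabulary, bias strength, and hash-based seeding are all illustrative assumptions:

```python
import hashlib
import random

VOCAB = [f"w{i}" for i in range(1000)]  # toy vocabulary

def green_list(prev_token: str, fraction: float = 0.5) -> set:
    """Deterministically pick a 'preferred' half of the vocabulary,
    seeded by the previous token (a stand-in for a secret watermark key)."""
    seed = int.from_bytes(hashlib.sha256(prev_token.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def generate(n_tokens: int, bias: float = 4.0, seed: int = 0) -> list:
    """Sample tokens, upweighting green-list words without excluding others,
    so the text stays fluent while carrying a statistical signal."""
    rng = random.Random(seed)
    out = ["<start>"]
    for _ in range(n_tokens):
        greens = green_list(out[-1])
        # Toy "model": uniform scores, boosted for preferred words.
        weights = [bias if w in greens else 1.0 for w in VOCAB]
        out.append(rng.choices(VOCAB, weights=weights, k=1)[0])
    return out[1:]

def green_fraction(tokens: list) -> float:
    """Detection: measure how often tokens fall in the preferred set.
    Watermarked text exceeds the ~50% rate expected of unmarked text."""
    hits = sum(1 for prev, tok in zip(["<start>"] + tokens, tokens)
               if tok in green_list(prev))
    return hits / len(tokens)

watermarked = generate(200)
rng = random.Random(1)
unmarked = [rng.choice(VOCAB) for _ in range(200)]
print(green_fraction(watermarked), green_fraction(unmarked))
```

Because the detector only counts preferred-word frequency against a known baseline, it needs no access to the original prompt or model, which is why such watermarks can survive into downstream copies of the text.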


Google first introduced SynthID watermarking in August 2023 as a tool to detect AI-generated images. With the launch of SynthID Detector, Google expanded this functionality to include audio, video, and text.


Currently, SynthID Detector is available in a limited release, with a waitlist for journalists, educators, designers, and researchers who want to try the program.



As generative AI tools become more widespread, educators are finding it increasingly difficult to determine whether a student’s work is original, even in assignments meant to reflect personal experiences.


Using AI to cheat


A recent report by New York Magazine highlighted this growing problem.


A technology ethics professor at Santa Clara University assigned a personal reflection essay, only to find that one student had used ChatGPT to complete it.


At the University of Arkansas at Little Rock, another professor discovered students relying on AI to write their course introduction essays and class goals.


Despite a rise in students using its AI models to cheat in class, OpenAI shut down its AI detection software in 2023, citing a low accuracy rate.


"We recognize that identifying AI-written text has been an important point of discussion among educators, and equally important is recognizing the limits and impacts of AI-generated text classifiers in the classroom," OpenAI said at the time.


Compounding the issue of AI cheating are new tools like Cluely, an application designed to bypass AI detection software. Developed by former Columbia University student Roy Lee, Cluely circumvents AI detection at the desktop level.


Lee raised $5.3 million to build out the application, which is promoted as a way to cheat on exams and interviews.


“It blew up after I posted a video of myself using it during an Amazon interview,” Lee previously told Decrypt. “While using it, I realized the user experience was really interesting—no one had explored this idea of a translucent screen overlay that sees your screen, hears your audio, and acts like a player two for your computer.”


Despite the promise of tools like SynthID, many current AI detection methods remain unreliable.


In October, Decrypt tested four leading AI detectors, Grammarly, QuillBot, GPTZero, and ZeroGPT, and found that only two could correctly determine whether the U.S. Declaration of Independence was written by a human or by AI.


Edited by Sebastian Sinclair

