
Meta|Jul 25, 2025 03:42
Who secretly stole from you 💰? Everyone knows that AI ultimately runs on feeding, and the fuel it feeds on is "data". Remember when I was grinding NFTs? For a while, someone had the idea of making derivative fan art for the community, which gave a bunch of "failed art students" a second career. Plenty of people even recruited artist friends around them to work in studios, with prices ranging from 50-500 depending on quality. Today, you can simply throw community-related material into an AI and generate a pile of derivative works, saving enormous time and effort.
Just now, I also came across data released by @campnetworkxyz from a sampling audit of the DataComp CommonPool dataset. Auditing only a 0.1% sample, they found thousands of unauthorized creator works. And this data is core training material for today's commercial AI models. In other words, the big models earned money that should have been yours, but you never saw a cent.
Creators never even had a chance to push back, because most of these models were trained on data collected between 2014 and 2022, and back then most people had little understanding of licensing and royalty mechanisms, let alone of AI. For many users in the crypto community, their first lesson in royalty awareness probably came from OpenSea: only when trading JPEGs and tallying up profits did they realize what creators and platforms were actually earning 🤣
The issue now is that these models have already been commercialized, from selling API access and image generation to selling subscriptions. Companies like Stability AI and Midjourney have already made a lot of money. Meanwhile the creators whose work fueled them were quietly robbed of everything.
What Camp @campnetworkxyz is building is infrastructure for creators themselves, letting them control their IP rights, protect their creations, and even take a share of the AI economy.
Camp's mechanism includes:
1️⃣ Register the creator's work on-chain and establish the creator's identity (Proof of Provenance)
2️⃣ Set usage rules and license terms, so AI cannot train on the work for free.
3️⃣ Any model that wants to use the data must first obtain authorization and pay a licensing fee.
4️⃣ All usage records and revenue distribution are executed automatically, making revenue traceable.
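The four steps above can be sketched as a toy registry. This is purely illustrative Python under my own assumptions (class names, fee model, and flow are all hypothetical), not Camp's actual on-chain implementation, which would live in smart contracts:

```python
from dataclasses import dataclass

@dataclass
class License:
    fee: float           # hypothetical flat licensing fee per training use
    allow_training: bool # whether AI training is permitted at all

@dataclass
class Work:
    creator: str    # creator identity, fixed at registration (step 1)
    work_hash: str  # content hash standing in for the on-chain record
    license: License

class ProvenanceRegistry:
    """Toy model of the register -> license -> authorize -> distribute flow."""
    def __init__(self):
        self.works = {}      # work_hash -> Work
        self.balances = {}   # creator -> accrued licensing revenue
        self.usage_log = []  # (model, work_hash, fee) records, traceable (step 4)

    def register(self, work: Work) -> None:
        # Step 1: register the work and bind it to its creator
        self.works[work.work_hash] = work

    def request_training_use(self, model: str, work_hash: str, payment: float) -> bool:
        # Steps 2-3: enforce license terms and require payment up front
        work = self.works.get(work_hash)
        if work is None or not work.license.allow_training:
            return False
        if payment < work.license.fee:
            return False
        # Step 4: log the usage and route revenue to the creator automatically
        self.balances[work.creator] = self.balances.get(work.creator, 0.0) + payment
        self.usage_log.append((model, work_hash, payment))
        return True
```

For example, a model that pays the asked fee gets authorized and the creator's balance grows; an unregistered work or an underpaid request is simply refused.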
I think this actually matters, because it's not just about "stopping your work from being used to train AI". The point is that copyright awareness, creator rights, and value-capture mechanisms all need to be redesigned for the AI era. AI isn't going away, but it's time to change how it acquires its data.