Fair AI labels?  An organization is checking whether generative AIs were fed legally

Geralt of Sanctuary

Artificial intelligences don't just invent things out of thin air. They are trained on data that comes from real people. This is why protecting copyright is so important. (Symbol image, photo by Steve Johnson on Unsplash)

Learning AIs are often fed data whose copyright status has not been clarified or has simply been ignored. This can lead to legal problems if the generated material is then used commercially.

These AIs exploit other people's intellectual property and create new content based on it. Artists, photographers and authors in particular view this extremely critically.

Until now, it has been almost impossible for outsiders to check the legality of AI models. Now the non-profit organization Fairly Trained wants to give companies a label that makes this legal cleanliness transparent.

To do this, it checks whether the respective companies have clarified, for each piece of training content, whether the author consents to the AI being fed that content.

An important step: This is the only way to better protect authors from exploitation

According to The Verge, Fairly Trained only issues its certification if the data has been correctly obtained and the rights holders protected. Developers who rely on the fair-use argument are excluded.

Fair use: This is a doctrine from American copyright law. It permits the use of copyrighted material under certain circumstances without the consent of the authors.

Its actual goal is to strike a balance between the interests of authors and the interests of the public. Many developers, however, argue that the common good served by fair use outweighs individual claims, and therefore ignore the copyrights of individuals.

In a blog entry dated January 17, 2024, Fairly Trained writes that it has issued its new certificate to nine generative AIs, including music, image and voice generators.

The certified companies are:

  • Beatoven.AI
  • Boomy
  • BRIA AI
  • Endel
  • LifeScore
  • Rightsify
  • Somms.ai
  • Soundful
  • Tuney

Will the certificate prevail?

It remains to be seen whether this model will prevail in practice. AI models that have already been fed material of unclear provenance would have to be retrained from scratch in order to receive the certificate. That would cost the developers concerned effort, time and money.

The example of OpenAI (ChatGPT) shows quite clearly how low a priority fairness has been given. The company has repeatedly been criticized and sued for copyright infringement in the past because it did not obtain the appropriate permissions.

Even the people behind Fairly Trained do not know to what extent this certificate solves the problems that generative AIs pose for creators in general.

Still, it gives us consumers a chance: it lets us identify which companies have worked more cleanly and done more to protect the rights and ideas of artists and creators. In the end, refusing to use their products is the decisive lever we have to force companies like OpenAI to rethink.

What do you think about these certificates? Can the copyright dispute be resolved this way? Will big companies like OpenAI, which have probably pumped millions of gigabytes of copyrighted data through their systems, even give in? Would you avoid non-certified AI generators? Feel free to share your thoughts in the comments.
