And this will also affect non-AI imitations of voices?

  • litchralee@sh.itjust.works · 18 hours ago (edited)

    In a nutshell, voices are not eligible for copyright protection under US law, whose hegemony results in most of the world conforming to the same. The principal idea of copyright is that it only protects a particular rendition of some work or act. A writer’s manuscript, an artist’s early sketches, a software engineer’s source code, and a vocalist’s audition recording all imbue their creator with a valid copyright, but only for that particular product of their efforts.

    It is not permissible to copyright the idea of a space opera, nor a style of painting, nor an algorithm for a computer routine, nor one’s own voice. Basically, pure thoughts cannot be copyrighted, nor can things which are insufficiently creative (such as the number 42), nor natural traits or phenomena.

    If we did change the law to allow the copyright of a human voice, then any satire or mockery that involves doing a good impression of someone speaking would suddenly be a copyright violation. This is nuts: it would also penalize someone else who, through no fault of their own, happens to have an identical voice. Would they just not be allowed to speak, ever? Intellectual property rights stem from the US Constitution, but so do First Amendment speech rights, and a direct collision of the two would have strange and unusual contours.

    For when ideas can be protected by law, see patents. And for when voices can be protected, see soundmarks/trademarks and brand rights, the latter stemming from rights of association. Such protections generally only hold when the voice or sound in question is an artificial product, like the sound of Ronald McDonald, and the protection only limits direct competitors from using the voice or sound improperly; everyone else is free to do impressions if they want.

    So, for the titular question: the scenario posed simply will not occur under current law, and it’s hard to see how it would be practical even if the law did permit it.

    • nimpnin@sopuli.xyz · 13 hours ago

      whose hegemony results in most of the world conforming to the same

      Does it really? I know there are significant differences between US and EU copyright law; fair use comes to mind.

    • Uli@sopuli.xyz · 17 hours ago

      While you are correct about copyright on this subject, the more applicable topic here is the Right of Publicity. It is recognized in state law in over half of US states and is intended to protect a person’s vocal likeness from unauthorized use.

      Essentially, if an imitation voice could cause confusion about whether it is really the imitated person, then it is illegal to use it in any commercial context. I understand that the question here was about non-commercial contexts, but that line can get blurry when social media views create followings that then translate into commercial success. I am not a lawyer by any means; I’ve just been researching this for my own AI voice applications and want to protect myself from accidentally imitating anyone.

      For example, I need to be able to transform my voice into many other character voices, since I have so many lines to record that it would be cost-prohibitive to hire actors. The worst move would be to download a voice model of a known actor and use that directly. Very sketchy, both legally and ethically.

      So, the next best move is to find three or four voice models and merge them into one, combining the tensor data from all of them. But I was still quite concerned about this, worried that among the many thousands of voice lines I make, some recognizable actor voices would slip through.

      So, I came up with the following pattern that I feel much more comfortable with, both legally and ethically:

      1. I download several voice models that have some quality in common - an accent, a vocal timbre, or a style of speaking.
      2. I merge them to make a model that focuses on that trait.
      3. I record myself saying a line with a lot of phoneme variety, trying to match that vocal trait as closely as possible.
      4. I use the merged vocal-trait model to transform the recording of my voice into the new voice.
      5. I use that transformed recording to train a new voice model.
      6. I take a few of these generalized models (e.g. an accent, a tone, a speaking style) and combine them to create the final character voice, which should in theory be far removed from any of the actors who contributed.
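
      To make step 2 concrete, here is a minimal sketch of the merging step, assuming the voice models are PyTorch checkpoints that share one architecture so their weights can be naively averaged; the file names are placeholders, not real models.

      ```python
      # Hypothetical sketch of the "merge several models into one trait model" step.
      # Assumes plain PyTorch state dicts with identical keys and tensor shapes.
      import torch

      def merge_checkpoints(paths, out_path):
          """Average the floating-point weights of several same-architecture checkpoints."""
          state_dicts = [torch.load(p, map_location="cpu") for p in paths]
          merged = {}
          for key, ref in state_dicts[0].items():
              if torch.is_tensor(ref) and ref.is_floating_point():
                  # Element-wise mean of this parameter across all models.
                  merged[key] = torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
              else:
                  # Copy non-tensor metadata and integer buffers from the first model.
                  merged[key] = ref
          torch.save(merged, out_path)

      # e.g. three models that all carry the accent I want to isolate (placeholder names):
      merge_checkpoints(["accent_a.pth", "accent_b.pth", "accent_c.pth"], "accent_trait.pth")
      ```

      Merging tools for voice-conversion models do something along these lines, often with per-model weights instead of a plain mean; the point is only that the averaging happens in weight space, which is why the checkpoints need identical architectures.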

      I’m not sure what OP’s use case is; if it’s truly non-commercial, this method might be overkill. But if anyone wants to try using AI voices in projects but is nervous about legal ramifications, this is one way to try to insulate created voices from the specific training data. YMMV.

  • skvlp@lemmy.wtf · 13 hours ago

    It is hygge to read about Denmark doing good work in this area.

    The proposal establishes legal definitions for unauthorized digital reproductions, specifically targeting “very realistic digital representation of a person, including their appearance and voice.”

    Creative works such as parody and satirical content remain exempt from these restrictions.

    https://www.fastcompany.com/91360589/denmark-copyright-yourself-it-might-be-the-only-way-to-stop-deepfakes

    • ryujin470@fedia.io (OP) · 8 hours ago

      Does this only affect digital reproductions? Because it’s sometimes easy to reproduce someone else’s voice, especially if you are a voice actor. And what if you happen to have a voice almost identical to another person?

      • skvlp@lemmy.wtf · 6 hours ago

        Does this only affect digital reproductions?

        The article seems focused on the digital angle, but I’m not sure.

        Because it’s sometimes easy to reproduce someone else’s voice, especially if you are a voice actor.

        Is that widespread? My impression is that misrepresentation through fake likenesses has exploded with AI-generated deepfakes.

        And what if you happen to have a voice almost identical to another person?

        Then you have the right to your voice and they have the right to theirs. If neither of you uses the likeness for misrepresentation or other nefarious means, then nothing will come of it. If either does, there are probably already avenues within current legal frameworks to prosecute.

        • ryujin470@fedia.io (OP) · 6 hours ago

          Is that widespread? My impression is that misrepresentation through fake likenesses has exploded with AI-generated deepfakes.

          Voice imitations without AI are possible if you are skilled enough, but using AI is much easier because you can use recordings of people’s voices directly rather than trying to imitate the voices yourself.