As a medical doctor I extensively use digital voice recorders to document my work, and my secretary does the transcription. As a cost-saving measure, the process is soon to be replaced by AI-powered transcription trained on each doctor’s voice. As I understand it, the resulting model is not stored locally and I have no control over it whatsoever.

I see many dangers, since the model is trained on biometric data and could possibly be used to recreate my voice. Of course I understand that there are probably other recordings of me on the Internet, enough to recreate my voice, but that’s beside the point. Also, the question is about educating my bosses and colleagues, not a legal one.

How do I present my case? I’m not willing to have a non-local AI transcribe my voice, but I don’t want to be perceived as a paranoid nutcase. Preferably, I want my bosses and colleagues to understand the privacy concerns and dangers of using a “cloud solution”. Unfortunately they are totally ignorant of the field of technology, and the explanations/examples need to translate for the lay person.

  • I don’t know where you live, but almost all US big-tech cloud is problematic (read: illegal to use) for storing or processing personal information under the GDPR if you’re based in the EU. I don’t know about HIPAA and other non-EU legislation, but almost all cloud services use US big tech as a subprocessor under the hood, which means the use of AI and cloud is most likely not GDPR-compliant. You could mention that to the right people and hope they listen.

    Edit: It’s illegal to use for processing the patients’ PII, because of transfers to insecure third countries and because big tech uses the data for its own purposes without any legal basis.

    Edit 2: The same applies to your and your colleagues’ PII.

    In my opinion, privacy and GDPR are the same thing in this case. I think most public authorities, e.g. hospitals or the relevant health authority, are required to have a DPO. The DPO can help answer your and your bosses’ questions on these points.

    Hope you figure it out.

    • I agree, and I suspect this planned system might get scuttled before release due to legal problems. That’s why I framed it in a non-legal way: I want my bosses to understand the privacy issue, both in this particular case and in future ones.

    • You don’t have to use a cloud service to do AI transcription. You don’t even need to use AI. Speech-to-text has been a thing for like 30+ years. (See the local-transcription sketch at the end of this comment.)

      Also, AWS has a FedRAMP-authorized GovCloud that’s almost certainly compliant with HIPAA (and its non-US counterparts).

      Also also, there are plenty of cloud based services that are HIPAA compliant.
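      For what it’s worth, here is a minimal sketch of what fully local transcription can look like, using the open-source Whisper model as an example (my choice of tool, not necessarily what the hospital’s vendor offers; “dictation.wav” is just a placeholder file name):

      ```python
      # Minimal sketch: fully local speech-to-text with the open-source Whisper model.
      # Nothing is sent to any cloud service: the model weights are downloaded once,
      # cached locally, and inference runs on the local CPU/GPU.
      import whisper  # pip install openai-whisper

      model = whisper.load_model("small")                         # local model
      result = model.transcribe("dictation.wav", language="en")   # placeholder file
      print(result["text"])                                       # the transcript
      ```

      A hospital IT department could run something like this on an on-premises server, so neither the dictation audio nor the transcripts ever leave the organisation.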

  •  Spyder   ( @Spyder@lemmy.ml ) 

    Do your patients know that their information is being transcribed in the cloud, which means it could potentially be hacked, leaked, tracked, and sold? How does this foster a sense of distrust and harm the patients’ progress?

    Could you leverage this information, and the possibility of being sued if information is leaked, with the bureaucrats?

  • You’re going to lose this fight. Admin types don’t understand technology and, at this point, I imagine neither do most doctors. You’ll be a loud minority because your concerns aren’t concrete enough and ‘AI is so cool. I mean it’s in the news!’

    Maybe I’m wrong, but my organization just went full ‘we don’t understand AI so don’t use it ever,’ which is the other side of the same coin.

    • I understand the fight will be hard, and I’m not getting into it if I can’t present something they will understand. I’m definitely in a minority both among the admin staff and my peers, the doctors. Most are totally ignorant of the privacy issue.

  •  7heo   ( @7heo@lemmy.ml ) 

    I would have my employer sign a legal waiver stating that, from the moment I use the technology, none of the recordings or transcriptions of me can be used to incriminate me in case of alleged malpractice.

    In fact, since both the recordings and the transcriptions are (or can be) generated in a way that sounds very assertive while also containing incredibly wild mistakes, in a potentially life-and-death situation, I would have them legally recognise that this can nullify my work, and have them take the entire legal responsibility for it.

    As you can see in the recent example involving Air Canada, a policy was invented out of thin air, and that policy is now costing the company. In the case of a doctor, if the wrong sedative, the wrong medication, or the wrong diagnosis were communicated to the patient, the consequences could be serious.

    All of it sounding like you (using your phrasing, etc.) and extremely assertive.

    A human doing that job will know not to deviate from the recording. An AI? “Antihistaminic” and “anti asthmatic” aren’t too far off, and that is just one example off the top of my head (see the toy comparison below).
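    As a crude toy illustration (my own example, using plain string similarity as a rough stand-in for how easily a transcription system can land on a wrong but similar term):

    ```python
    # Toy illustration only: string similarity as a crude proxy for how close
    # easily-confused medical terms can be. A real ASR system confuses terms
    # acoustically, but the point is the same: small differences, big consequences.
    from difflib import SequenceMatcher

    pairs = [
        ("antihistaminic", "anti asthmatic"),   # the example from above
        ("hypotension", "hypertension"),        # another classic near-miss (my addition)
    ]
    for a, b in pairs:
        ratio = SequenceMatcher(None, a, b).ratio()
        print(f"{a!r} vs {b!r}: similarity {ratio:.2f}")
    ```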

    • My question is not a legal one. There are probably legal obstacles for my hospital in this case, but HIPAA is not applicable in my country.

      I’d primarily like to get your opinions on how to effectively present my case to my bosses against using a non-local model for this.

      • Look to your local health privacy laws. Most countries have that tightly controlled in such a way that this use of AI is illegal.

        Your question is not a legal one, but a legal argument can be a very persuasive one.

    •  Szymon   ( @Szymon@lemmy.ca ) 

      It is until they prove it isn’t, which they might not be able to do. Many trusted 23andMe, only to see their private data stolen. Make the company prove the security measures in place and the methods ensuring privacy, because you’ll essentially be liable for any failures of the system that stem from a lack of due diligence.

  • It would be worth finding out more about how exactly the training process works, namely whether or not the AI company stores the training audio clips after training has been completed. If not, then I would say you don’t have anything to worry about, because the model itself can’t be used to clone your voice to any useful extent. Deep neural networks aren’t reversible like that. Even if they were, the model isn’t trained just on you; it’s trained on hundreds of thousands of people and then fine-tuned to you. (See the sketch at the end of this comment for what the stored artifact actually looks like.)

    If they do store the clips, though, then maybe show them this article about GitHub to prove that there is precedent for private companies using people’s data to train AI without their explicit consent.
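    To illustrate the “the model is just weights” point, here is a rough sketch of what a stored speech-recognition model actually amounts to. I’m using torchaudio’s pretrained Wav2Vec2 ASR bundle purely as a stand-in for whatever model the vendor uses (that part is my assumption):

    ```python
    # Rough sketch: what a saved speech-recognition "model" actually contains.
    # The pretrained Wav2Vec2 bundle stands in for the vendor's model; fine-tuning
    # on a doctor's clips would only adjust these same weight tensors in place.
    import torch
    import torchaudio

    bundle = torchaudio.pipelines.WAV2VEC2_ASR_BASE_960H
    model = bundle.get_model()           # pretrained on many speakers

    state = model.state_dict()           # the artifact that would be stored
    torch.save(state, "doctor_model.pt")

    # Inspecting it shows nothing but named tensors of floats: no waveforms,
    # no transcripts, and no direct way to reconstruct a voice from them.
    for name, tensor in list(state.items())[:5]:
        print(name, tuple(tensor.shape), tensor.dtype)
    ```

    The real risk, as noted above, is whether the raw audio clips themselves are retained alongside the model.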

  • I would suggest that the first action item would be to ask, in writing, for their 1) data protection and 2) privacy policies. I would then either pick them apart, or find someone who works in cybersecurity (or the right lawyer) to do that. I’ve done it a few times and talked my employer out of a few dodgy products, because the policies clearly try to absolve the vendor of any potential liability. Now, whether the policies truly limit liability would have to be tested in court.

    You could also talk about how data protection, encryption, identity and access management, and governance are actually really expensive, but I’d start by poking holes in the actual policies to create doubt.

  • Stop using the digital voice recorder and type everything yourself. This is the best way to protect your voiceprint in this situation. It doesn’t work well as a protest or to educate your colleagues, but I suppose that’s one thing you can use your voice for. Since AI transcription is a cost-saving measure, there will be nothing you can do to stop its use. No decision-maker will choose the more expensive option with a higher error rate on morals alone.

      • Even if this gets implemented, I can’t imagine it will last very long with something as completely ridiculous as removing the keyboard. One AI API outage and the entire office completely shuts down. Someone’s head will roll when that inevitably happens.

        • Ah, sorry, I meant removing the option of using the keyboard as an input method in the medical records system. The keyboard itself isn’t physically removed from the client computers.

          But I agree that in the event of a system failure the hospital will halt.

          • Also, one angle could be to get permission from someone in leadership to clone their voice on ElevenLabs and make the clone say something particularly problematic, just to stress how easily voice data can be misused.

            If this AI vendor is ever breached, all the attackers have to do is robocall patients pretending to be a real doctor they know. I don’t think I need to spell out how poorly that would go.

    • I work in Sweden, so this falls under the GDPR. There are probably GDPR implications, but as I wrote, the question is not a legal one. I want my bosses to be aware of the general issue, as this is but the first of many similar problems.

      The training data is to be collected per person, resulting in a model tailored to every single doctor.

  •  The Doctor   ( @drwho@beehaw.org ) 

    The personalized data model will be trained on your voice. That means that it’s going to be trained on a great deal of patient medical history data (including PII). That means it’s covered by HIPAA.

    I strongly doubt the service in question meets even the most minimal of requirements.

  • Your voice-print is worth protecting.

    There are already retirement funds activating “my voice is my password” by default now. (You can, and absolutely should, opt out if yours does.) And you can’t change your voice-print if it gets leaked. (Maybe with a professional voice coach, you could…)

    Personally, I would change employers over this, if I had the option.

    I think we’re heading towards a group of citizens with compromised voice-prints leaked to the dark web, who have a harder time day to day through no fault of their own. As with the early SSN breach sufferers, history tells us that society says “it’s a shame”, tries to protect the next generation properly, but doesn’t recompense those hurt by the early bullshit.

    While job searching, I would also request an accommodation and not use the voice system. It’s much easier for the employer to retain a secretary for you than to deal with the legal hassles that will come up if they try to fire you for not using their legal-gray-area solution.

    Even if granted the accommodation, though, I would be looking for my next job.

  • I assume you’ll be using Dragon Medical One. Nuance is a well established organization, with users in a broad range of professions, and their medical product is extensively used by many specialists. The health system where I live has been in the process of phasing out transcriptionists in favor of it for a decade or so.

    The only potential privacy concern a hospital would care about is whether the vendor stores your transcripts on their servers, because those will contain sensitive information about patients. It will be impossible to get any administrator to care about your voice data.

    This is a tide you are unlikely to be able to stem, but you could stop dictating and type it yourself.

    • Yeah, I’d be sooooo confident and reassured if I knew my doctor was prioritising the security of their voice over the security of my information… /s

      (yes, it can be both, but this post doesn’t seem at all concerned with one, and entirely with the other)

    • That’s a separate issue and doesn’t lessen the importance of this one. Both are important, but separate: one is about patient data, the other about my voice model. Also, in this case I have no control over the medical records, and they are already stored outside the hospital.