Automated medical scribe
    Automated medical scribes (also called artificial intelligence scribes, AI scribes, digital scribes, virtual scribes, ambient AI scribes, AI documentation assistants, and digital/virtual/smart clinical assistants) are tools for transcribing medical speech, such as patient consultations and dictated medical notes. Many also produce summaries of consultations. Automated medical scribes based on Large Language Models (LLMs, commonly called "AI", short for "artificial intelligence") increased drastically in popularity in 2024. There are privacy and antitrust concerns. Accuracy concerns also exist, and intensify when tools go beyond transcribing and summarizing and are asked to organize information by its meaning, since LLMs do not deal well with meaning (see weak artificial intelligence). Medics using these scribes are generally expected to understand the ethical and legal considerations and to supervise the outputs.
    The privacy protections of automated medical scribes vary widely. While it is possible to do all the transcription and summarizing locally, with no connection to the internet, most closed-source providers require that data be sent to their own servers over the internet, processed there, and the results sent back (as with digital voice assistants). Some vendors say their tools use zero-knowledge encryption (meaning that the service provider can't access the data). Others explicitly say that they use patient data to train their AIs, or rent or resell it to third parties; the nature of privacy protections used in such situations is unclear, and they are likely not to be fully effective.
    Most providers have not published any safety or utility data in academic journals, and are not responsive to requests from medical researchers studying their products.


    Privacy


    Some providers are unclear about what happens to user data. Some may sell data to third parties. Some explicitly send user data to for-profit tech companies for secondary purposes, which may not be specified. Some require users to sign consents to such reuse of their data. Some ingest user data to train the software, promising to anonymize it; however, deanonymization may be possible (that is, it may become obvious who the patient is). It is intrinsically impossible to prevent an LLM from correlating its inputs; LLMs work by finding similar patterns across very large data sets.
    Some information on the patient will be known from other sources (for instance, information that they were injured in an incident on a certain day might be available from the news media; information that they attended specific appointment locations at specific times is probably available to their cellphone provider/apps/data brokers; information about when they had a baby is probably implied by their online shopping records; and they might mention lifestyle changes to their doctor and on a forum or blog). The software may correlate such information with the "anonymized" clinical consultation record and, asked about the named patient, provide information which they only told their doctor privately. Because a patient's record is all about the same patient, it is all unavoidably linked; in very many cases, medical histories are intrinsically identifiable. Depending on how common a condition is and what other data are available, k-anonymity may be useless. Differential privacy could theoretically preserve privacy.
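    As a minimal sketch of that last point (not any vendor's implementation), the Laplace mechanism for differential privacy publishes aggregate statistics with calibrated noise, so the released figure reveals little about whether any single patient is in the data. The function name and parameters below are illustrative:

```python
import numpy as np

def dp_count(true_count: int, sensitivity: float = 1.0,
             epsilon: float = 0.5) -> float:
    """Return a differentially private version of a count.

    sensitivity: how much one patient can change the count (1 for counts).
    epsilon: privacy budget; smaller values mean more noise, more privacy.
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# e.g. releasing how many patients in a training set share a rare condition
print(dp_count(3))
```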
    Data broker companies like Google, Amazon, and Microsoft have produced or bought up medical scribes, some of which use user data for secondary purposes, which has led to antitrust concerns. Transfer of patient records for AI training has, in the past, prompted legal action.
    Open-source programs typically do all the transcription locally, on the doctor's own computer. Open-source software is widely used in healthcare, with some national public healthcare bodies holding hack days.


    = Encryption =
    Multifactor authentication for access to the data is expected practice.
    Typically, Diffie–Hellman key exchange is used to establish the encryption keys; this is the standard method commonly used for things like online banking. Such encryption is expensive but not impossible to break; it is not generally considered safe against eavesdroppers with the resources of a nation-state.
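    A toy sketch of the exchange follows; the tiny prime and generator are for illustration only (real deployments use primes of 2048 bits or more, or elliptic curves) and offer no security:

```python
import secrets

p, g = 23, 5                      # public prime modulus and generator

a = secrets.randbelow(p - 2) + 1  # Alice's private exponent, never sent
b = secrets.randbelow(p - 2) + 1  # Bob's private exponent, never sent

A = pow(g, a, p)                  # Alice transmits A in the clear
B = pow(g, b, p)                  # Bob transmits B in the clear

# Each side combines its own secret with the other's public value and
# derives the same shared key, which never crosses the network.
assert pow(B, a, p) == pow(A, b, p)
```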
    If content is encrypted between the client and the service provider's remote server (transport cryptography), then the server has an unencrypted copy. This is necessary if the data is used by the service provider (for instance, to train the software). Zero-knowledge encryption implies that the only unencrypted copy is at the client, and the server cannot decrypt the data any more easily than a monster-in-the-middle attacker.
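    A minimal sketch of the zero-knowledge approach, using the third-party Python cryptography package (the file name and key handling are illustrative; a real system needs key management and authentication):

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # generated and kept on the client only
cipher = Fernet(key)

with open("consultation.wav", "rb") as f:   # hypothetical recording
    ciphertext = cipher.encrypt(f.read())

# Only `ciphertext` is uploaded; without `key`, the service provider can
# read no more than a monster-in-the-middle eavesdropper could.
```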


    Platforms


    Scribes may operate on desktop, laptop, or mobile computers, under a variety of operating systems. These vary in their risks; for instance, mobiles can be lost. The underlying mobile or desktop operating systems are also part of the trusted computing base, and if they are not secure, the software relying on them cannot be secure either.


    Confabulation, omissions, and other errors


    Like other LLMs, medical-scribe LLMs are prone to confabulation, where they make up content based on statistical associations between their training data and the transcription audio. LLMs do not distinguish between trying to transcribe the audio and guessing what words will come next, but perform both processes mixed together. They are especially likely to render short silences or non-speech noises as invented speech.
    LLM medical scribes have been known to confabulate racist and otherwise prejudiced content; this is partly because the training datasets of many LLMs contain pseudoscientific texts about medical racism. They may misgender patients. A survey found that most doctors preferred, in principle, that scribes be trained on data reviewed by medical subject experts. Relevant, accurate training data increases the probability of an accurate transcription, but does not guarantee accuracy. Software trained on thousands of real clinical conversations generated transcripts with lower word error rates. Software trained on manually transcribed training data did better than software trained on automatically transcribed training data (such as YouTube captions).
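    Word error rate, the metric mentioned above, is the minimum number of word substitutions, deletions, and insertions needed to turn the generated transcript into the reference transcript, divided by the reference length. A straightforward implementation:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via Levenshtein distance over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substituted word out of four: WER = 0.25
print(wer("patient denies chest pain", "patient has chest pain"))
```

    Note that a single substituted word ("denies" becoming "has") inverts the clinical meaning, which is why a low word error rate alone does not guarantee a safe transcript.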
    Autoscribes omit parts of the conversation classed as irrelevant. They may wrongly classify pertinent information as irrelevant and omit it. They may also confuse historic and current symptoms, or otherwise misclassify information. They may also simply transcribe the speech wrongly, writing something incorrect instead. If clinicians do not carefully check the output against the recording, such mistakes could make their way into medical records and cause patient harm.


    Patient consent


    Professional organizations generally require that scribes be used only with patient consent; some bodies may require written consent. Medics must also abide by local surveillance laws, which may criminalize recording private conversations without consent. Full information on how data is encrypted, transmitted, stored, and destroyed should be provided. In some jurisdictions, it is illegal to transmit the data to any country without equivalent privacy laws, or to process or store the data there; products whose vendors cannot guarantee that data will not illegally be sent abroad cannot legally be used.
    Some vendors collect data for reuse or resale. Medical professionals are generally considered to have a duty to review the terms and conditions of the user agreement and identify such data reuse. General practices are generally required to provide information on secondary uses to patients, allow them to opt out of secondary uses, and obtain consent for each specific secondary use. Data must only be used for agreed-upon purposes.


    Technology and market


    The medical scribe market is, as of 2024, highly competitive, with over 50 products on the market. Many of these products are just proprietary wrappers around the same LLM backends, including backends whose designers have warned they are not to be used for critical applications like medicine. Some vendors market scribes specialized to specific branches of medicine (though most target general practitioners, who make up about a third of doctors). Increasingly, vendors market their products as more than scribes, claiming that they are intelligent assistants and co-pilots to doctors. These broader uses raise more accuracy concerns. Extracting information from the conversation to autopopulate a form, for instance, may be problematic, with symptoms incorrectly auto-labelled as "absent" even if they were repeatedly discussed. Models failed to extract many indirect descriptions of symptoms, like a patient saying they could only sleep for four hours (instead of using the word "insomnia").
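    A minimal sketch of why such extraction is brittle (illustrative only, not any vendor's pipeline): matching on symptom keywords catches a direct mention but silently drops the same complaint phrased indirectly:

```python
SYMPTOM_KEYWORDS = {"insomnia", "headache", "nausea"}

def extract_symptoms(transcript: str) -> set[str]:
    words = {w.strip(".,!?").lower() for w in transcript.split()}
    return SYMPTOM_KEYWORDS & words

print(extract_symptoms("I think I have insomnia."))
# {'insomnia'} -- the direct mention is caught
print(extract_symptoms("I can only sleep about four hours a night."))
# set() -- the same complaint, phrased indirectly, is missed
```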
    LLMs are not trained to produce facts, but things which look like facts. The use of templates and rules can make them more reliable at extracting semantic information, but "confabulations" or "hallucinations" (convincing but wrong output) are an intrinsic part of the technology.
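    A sketch of what such a rule or template layer can look like (the pattern and field name are hypothetical): a deterministic rule maps an indirect sleep complaint onto a structured field regardless of what the LLM generates:

```python
import re

# Hypothetical rule: capture "only sleep ... N hours" as a sleep complaint.
SLEEP_RULE = re.compile(r"only sleep (?:about |for )?(\w+) hours?", re.I)

def apply_rules(transcript: str, note: dict) -> dict:
    match = SLEEP_RULE.search(transcript)
    if match:
        note["sleep"] = f"reports sleeping {match.group(1)} hours"
    return note

print(apply_rules("I can only sleep about four hours a night.", {}))
# {'sleep': 'reports sleeping four hours'}
```

    Rules like this are predictable but narrow; they complement rather than eliminate the confabulation problem.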


    Pricing


    With the exception of fully open-source programs, which are free, medical scribe computer programs are rented rather than sold ("software as a service"). Monthly fees vary from mid-two figures to four figures, in US dollars. Some companies run on a freemium model, where a certain number of transcriptions per month are free.
    Scribes that integrate into Electronic Health Records, removing the need for copy-pasting, typically cost more.
    Fully open-source scribes provide the software for free. The user can install it on hardware of their choice, or pay to have it installed. Some open-source scribes can be installed on the local device (that is, the one recording the audio) or on a local server (for instance, one serving a single clinic). They can typically be set not to send any information externally, and can indeed be used with no internet connection.
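    As a sketch of fully local operation, the open-source Whisper speech-recognition model (an illustrative choice, not a tool named by this article) can transcribe audio entirely on the clinician's machine; after the one-time model download, no data leaves it:

```python
import whisper  # open-source openai-whisper package

model = whisper.load_model("base")             # runs on the local CPU/GPU
result = model.transcribe("consultation.wav")  # hypothetical file name
print(result["text"])
```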


    See also


    Medical scribe
    Large language model
    AI hallucinations
    List of open-source health software

