- AI and robotics proposed for diagnosis, simple surgeries, patient monitoring and more
- Focus on the impact of these technologies on patient-doctor trust
- A decision to use a robot to care for someone may affect the bonds created by human caregiving.
- The trustor makes themselves vulnerable based on expectations about the trustee's likely actions
- "Behavioral trust" - Grounded on reliability - the trustee is predictable by the trustor
- "Understanding trust" - trustor believes firmly trustee will act in certain ways in a particular
case, for example in novel situations.
- This kind of trust can be established with a machine too, but it's easier with humans.
- Role-based trust is not a third kind of trust, but a vehicle to establish one of the other two
- Doctors are a canonical example of role-based trust. We expect doctors to have certain values, beliefs and good intentions.
- The doctor's role has shifted from a paternalistic stance to a collaborative one.
- Problems arise when a patient's care needs differ from their near-term desires - e.g. a patient with bipolar disorder taking less lithium than prescribed
- Doctors are licensed, and grantors of licenses ensure objective criteria are met.
- If AI is introduced into a task, it threatens to displace some patient-doctor trust. The impact will depend on whether regulatory mechanisms and approval processes are put in place.
- An AI that performs procedures is different from an AI that judges whether procedures are appropriate.
- AI for procedures could be regulated in a way similar to how we regulate drugs.
- We would expect the social role of doctors to go from 'know-it-all' to 'mere user of an AI'. It's important for the public to know that the doctor is a power-user of an AI system and has a broader scope.
- Patient experiences may also undermine trust, e.g. a doctor who does not care about the patient's wishes, or a faulty AI
- People are often more willing to provide info to an AI than to a human.
- One good example of AI use could be patient monitoring - has a patient taken meds when they're supposed to? Do they comply?
- Doctors who use AI systems and their results must have educational training that's overseen, measured and approved by an independent outside group
- Regulators may compel certain types of education as a precondition for using AI or robotics technologies
- AI should not be used for patient care without the educated consent of the patient.
- Use of a technology does not take priority over a patient's wellbeing - e.g. the patient could opt out