Artificial intelligence is coming to the social care and healthcare sector – are you ready?

AI already has a strong presence in the social care and healthcare sector, and its presence will only increase. It holds a lot of promise, but it also entails risks and uncertainty that are difficult to control. How will the ethical principles of social care and healthcare be realised in the application of artificial intelligence?

Image: iStock

Artificial intelligence refers to a machine’s ability to imitate skills traditionally associated with human intelligence, such as deduction, learning, planning or creation. However, despite its name, it is neither intelligent nor wise, but an error-prone mimic.

AI is not a new invention, and it is already being applied in the social care and healthcare sector in areas such as diagnostics, patient monitoring and operational planning. It has long aided us through narrow applications in the most mundane tasks, such as spam filtering, predictive text input, chatbots and similar functions.

Expectations for AI applications and the new innovations they enable are high – particularly in areas where needs are great and resources scarce, such as securing safe housing and sufficient care services for the elderly.

Some solutions have already been found. For example, in treatment and care services, AI can be used to monitor a client remotely, identify hazardous situations, call for help, interact with the client and dispense medicine. It is currently estimated that 20% of indirect care work could be carried out with robots and other technological aids.

Will the use of AI cause ethical problems in the social care and healthcare sector?

Privacy and data security have proven highly vulnerable even with conventional technology, so the data protection level of AI must be higher than it is today. In particular, securing the integrity and consistency of data throughout its usage lifecycle is essential when applying AI, as only this can ensure that the procedures carried out and the decisions made are correct. Deficiencies in these areas can cause distortions that lead to errors, shortfalls in meeting individual needs, and unfair treatment.

Realising clients' and patients' rights when applying AI to treatment and care work requires that ethical requirements are met in a robust manner, from programming to application. In concrete treatment and care, it is key that clients and patients are aware of and consent to the application of AI, understand what it means, and participate in the decision-making. Achieving all of this requires new know-how from staff, patients and their loved ones alike.

The limitations, deficiencies and risks of the new technology must also be understood, and it must be ensured that human perspectives and the assumption of responsibility are preserved in decision-making and service production. As part of the data security of AI and the quality of the care provided, it is essential that the involvement, actions and results of AI are documented comprehensively and understandably, and that all parties are kept informed. This will also make it easier to evaluate and develop the functionality of AI.


AI must not cause additional work for social care and healthcare professionals

AI is often seen as a significant driver of profitability and effectiveness – which it is – but it also brings costs and resource needs that must be addressed.

Above all, AI needs its own caretakers to ensure its functionality, safety, development and supervision. AI must not simply create new additional work for social care and healthcare professionals, nor can it be layered on top of existing work. Instead, the work itself – its methods, tools, know-how and roles – must be reformed.

AI should be used for work which it is good at, such as analysing large quantities of data for decision-making, supporting monitoring and procedures, or taking care of routine tasks such as documentation in services and administration. It can also provide interactive support in care and treatment processes and assist both the staff and the patients in learning and developing. Similarly, it can promote the evaluation and development of operations, as well as research.

AI is a good servant but a poor master

AI is currently in a strong developmental phase, and it is difficult to name all the areas in which we will be able to apply it in the future. Utilising AI definitely holds major opportunities, but it also entails major risks, as it is not yet subject to sufficient supervision and regulation. An operator applying the new technology must take this into account in its operations and related risk management.

In the social care and healthcare sector, AI is shaping up to become an excellent servant, but a master it cannot be. Human interaction, empathy, professional evaluation and decision-making cannot be replaced by a machine and will remain in the capable hands of social care and healthcare professionals.