Last week we reported on the five areas where we see AI making the greatest impact in 2021 and beyond. This week we want to touch on some of AI’s ethical implications in the areas that most relate to our clients: healthcare, business development, and the legal field.
Last year the AMA Journal of Ethics published a fascinating article about the benefits and challenges of integrating AI into healthcare. These benefits include faster and more accurate diagnoses, applications in robotics and prostheses, and the delivery of telemedicine. But the authors highlight issues that they suggest are reason enough to pump the brakes on a speedy integration of AI in medicine.
One issue is balancing the risks and benefits of using AI, with particular attention to how it should be integrated.
This issue was brought to light by an empirical study which found that machine learning algorithms may not make equally accurate predictions across demographics such as race, gender, and socioeconomic status. The problem of AI learning and reflecting bias is widespread and has serious implications. It’s a salient reminder that AI is only as good as the data it learns from, which means that, just like humans, it is susceptible to learned bias and confirmation bias. Bad data, bad outcomes.
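To make the "bad data, bad outcomes" point concrete, here is a minimal sketch in Python. The group names and numbers are entirely hypothetical, and the "model" is deliberately naive (it just predicts at historical rates), but it illustrates the mechanism: a system trained on biased past decisions reproduces that bias, even for equally qualified people.

```python
# Hypothetical historical decisions: (group, qualified, approved).
# In this invented data, qualified members of group_b were often denied.
history = [
    ("group_a", True, True), ("group_a", True, True),
    ("group_a", True, True), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, False),
    ("group_b", True, True), ("group_b", False, False),
]

def predicted_approval_rate(group):
    """Naive 'model': predict approval at the group's historical rate."""
    outcomes = [approved for g, _, approved in history if g == group]
    return sum(outcomes) / len(outcomes)

# Equally qualified applicants get very different predictions, because
# the model learned the bias baked into the past decisions it saw.
print(predicted_approval_rate("group_a"))  # 0.75
print(predicted_approval_rate("group_b"))  # 0.25
```

Real systems are far more complex, but the failure mode is the same: the model faithfully optimizes against historical data, and if that history encodes discrimination, so will the predictions.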
A Health Affairs article also notes the issue of bias in AI and poses the question of who takes responsibility when AI is built into healthcare software. That question becomes urgent when AI makes a mistake, such as an incorrect diagnosis, with life-threatening consequences. The medical field deals in matters of life and death, so even a small chance that AI could err in a diagnosis or treatment recommendation is not one to take lightly.
One overarching concern about AI’s effect on business in general is mass job loss as the technology becomes more capable and robust. A recent McKinsey Global Institute report states that as many as 800 million people could lose their jobs to automation by 2030. Job loss deepens wealth inequality, which in turn perpetuates a host of other social issues, such as food and housing insecurity and gender and race wage gaps. Even if AI creates new jobs, it is already accomplishing tasks faster and more cheaply for employers, and the resulting profits are unlikely to trickle down, widening income inequality and creating a moral and ethical grey area.
As in healthcare, the issue of bias in AI comes into play as companies begin to use artificial intelligence in their hiring procedures. Bots can efficiently comb the web for personal information from social media, analyze writing samples, use algorithms to assess speech content, employ facial and voice recognition, and use that information to make judgments that may or may not be fair and equitable.
Employers aren’t allowed to ask about a job applicant’s religion, marital status, orientation, or other personal details, a rule meant to eliminate discriminatory hiring practices. But a bot can quickly discover someone’s place of worship from a Facebook check-in, whether they are divorced from public records, what dating apps they use, and the list goes on!
While AI could certainly make the hiring process more efficient, it can also make it easier for companies to discriminate against qualified applicants, whether by design or by default.
Much like the concerns about bias in the medical field, there are concerns about whether AI can provide objective, impartial support in the legal field.
A recent article by the International Legal Technology Association points out that while AI has the potential to transform the delivery of legal services, the development and use of AI tools remain limited in the field. There are challenges with regard to data, algorithms, and implementation. As noted above, AI may pick up on and perpetuate human bias, and it is not yet sophisticated enough to understand the nuanced issues that often affect the outcome of legal cases.
The use of AI in the legal field raises ethical issues beyond protecting clients from biased algorithms: protecting client data, providing accurate and dependable legal counsel, and whether people would feel comfortable relying on an AI tool rather than a human being for legal advice, even in cases that seem simple or straightforward on paper.
One argument for using AI in the legal field is that it could be more cost-effective for clients, but again, what happens if the AI makes a mistake, whether because of bias or because it cannot understand nuance? Who is held responsible? Another reason AI is hotly contested as a way to bring legal services directly to consumers is that it could constitute the Unauthorized Practice of Law (UPL). Even the use of chatbots to answer basic legal questions could be challenged, since there is no single uniform definition of UPL across American jurisdictions.
This also raises the related issue of technology competence: lawyers who use technology must understand it well enough to be confident they are using it in a way that complies with their ethical duties. This doesn’t mean every lawyer has to be an IT expert, but the competency standard can be subjective and problematic, and the use of AI as a tool falls squarely into this grey area.
The future of technology and business, regardless of industry, is in AI. While the ethical dilemmas that AI presents are complex, a common thread is that AI is not yet intelligent enough to recognize and eliminate human bias. In addition, there is always the potential for mistakes in decision making based on corrupted or incomplete data.
You may not be ready to implement AI into your business, but we are experts at setting up small businesses like yours to embrace a digital future. Get a tech assessment and ensure your data is secure and accurate with the support of Tech Masters.