What role should employment law play in regulating the use of artificial intelligence and automation in hiring and workplace management practices?
- Rania Ashraf
- Jun 3
- 6 min read
Human beings are entering a new, uncharted era in which powerful artificial intelligence (AI) and chatbot systems are becoming increasingly prevalent. The use of AI in the legal field has grown significantly over the past few years, and its impact should not be understated. This essay understands AI as the use of algorithms and machine learning to carry out tasks typically performed by humans. In the hiring context, AI and automation assist with resume screening through tools such as applicant tracking system (ATS) software. An ATS helps companies manage the recruitment process by collecting resumes, ranking candidates, and filtering out applications. Workplace management, meanwhile, includes the use of AI for tasks such as performance monitoring and scheduling. Regulating AI raises some important questions. Companies train AI on the candidates they have previously employed in order to predict employability; if a company has historically excluded a minority group, the AI will learn this pattern and make similar decisions, running counter to the Employment Rights Bill announced by the UK government on 17 July, particularly its provisions on unfair dismissal (Skillcast, 2023). The integration of AI must also be balanced against its potentially disastrous impact on the workforce, and its incorporation into workplace management practices raises further legal and ethical concerns.
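To make the ATS description above concrete, here is a minimal sketch of the kind of keyword-based ranking and filtering such software performs. This is an illustrative toy, not the logic of any real ATS product; all names and data are hypothetical.

```python
# Toy ATS-style screener: score each resume by how many of the job
# posting's keywords it contains, then filter out low scorers.
# All candidate data below is hypothetical.

def score_resume(resume_text, keywords):
    """Count how many keywords appear in the resume text."""
    text = resume_text.lower()
    return sum(1 for kw in keywords if kw.lower() in text)

def rank_candidates(resumes, keywords, cutoff=2):
    """resumes: dict of name -> resume text. Returns survivors, best first."""
    scored = {name: score_resume(text, keywords) for name, text in resumes.items()}
    survivors = {name: s for name, s in scored.items() if s >= cutoff}
    return sorted(survivors, key=survivors.get, reverse=True)

resumes = {
    "Alice": "Python developer with SQL and cloud experience",
    "Bob": "Retail manager with customer service experience",
}
print(rank_candidates(resumes, ["python", "sql", "cloud"]))  # → ['Alice']
```

Even this trivial filter shows how a candidate can be silently excluded by a cutoff they never see, which is the core of the transparency concerns discussed later.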
The UK does not currently plan to introduce broad, overarching AI regulation, known as 'horizontal regulation'. Instead, it operates a principles-based framework in which existing regulators in specific sectors, such as healthcare or finance, oversee the use of AI in their own areas. On 17 July 2024, however, the government signalled a departure from this flexible approach by proposing new measures for developers of AI models: a Digital Information and Smart Data Bill was announced, aiming to reform data laws to support safe AI development (White & Case, 2024). So while the UK has some AI regulation, it is broad and does not specifically address the use of AI in hiring and workplace management. Employment law in the UK must therefore evolve to regulate these areas.
The integration of AI and automation in hiring and workplace management has changed the employment law landscape, and companies increasingly use AI for recruitment. They train these systems on past candidates in order to predict the employability of new ones: the algorithms convert input data into output judgments and are trained to replicate human hiring practices. Employers who adopt AI often assume it is objective, and that it can therefore manage hiring free from the biases that affect human judgment and improve the quality of hires. On that logic it may seem that the risk of discrimination and unreasonable rejection is lower when using AI, but this has not been the case. The multinational company Amazon offers a prime example. Amazon not only actively used AI to recruit employees but faced serious legal scrutiny because of it: its algorithm was found to systematically favour male candidates, and the company eventually had to stop using the tool to hire employees (Matheson, 2023). At a personal level, many U.S. adults say they would not want to apply for a job with an employer that used AI to help make hiring decisions: 66% say they would not want to apply under those circumstances, compared with 32% who say they would, often for fear that the AI would not see the candidate's potential (Rainie et al.). Employment law must therefore require that AI used for hiring is regularly audited for bias and that diversity standards are enforced. It should also require that decisions made by AI in the hiring process are checked and monitored by professionals to ensure that discriminatory decisions are not made, and it should impose penalties on companies that allow such decisions to stand.
The UK's Equality Act 2010 (GOV.UK) could be revised to cover biased and prejudicial decisions made by AI.
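One way the regular bias audits argued for above could be operationalised is the "four-fifths rule" used in US employment selection guidelines: the selection rate for any group should be at least 80% of the rate for the most-selected group. A minimal sketch, using hypothetical outcome data:

```python
# Disparate-impact audit for an AI screening tool using the four-fifths
# rule: flag any group whose selection rate falls below 80% of the
# highest group's rate. The outcome counts below are hypothetical.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> (selected, total)."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Return dict: group -> True if it passes the four-fifths test."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best >= threshold for g, r in rates.items()}

outcomes = {"group_a": (45, 100), "group_b": (30, 100)}
print(four_fifths_check(outcomes))
# group_b's rate (0.30) is ~67% of group_a's (0.45), below the 80%
# threshold, so it is flagged as potential disparate impact.
```

A check this simple could be run after every hiring cycle; the legal question the essay raises is who is obliged to run it and what follows when a group is flagged.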
In addition, the rapid advancement of AI and automation can displace certain jobs. While AI can enhance productivity and potentially make work easier for employees, it can also threaten job security: 61% of large US firms plan to use AI within the next year to automate tasks previously done by employees (CNN, 2024). Employment law must therefore evolve to tackle these challenges efficiently. A central challenge is balancing the efficiency gains that AI provides against its potential impact on the workforce. Laws could require companies to reskill their employees to work alongside AI rather than be replaced by it. Employment law could also establish guidelines for 'responsible automation', requiring companies to assess and test an AI system before deploying it in the workplace to determine whether it merely helps people do their jobs or does the jobs for them. Such assessments would evaluate the potential impact on workers and set out strategies for mitigating negative effects, such as offering alternative roles within the company. By protecting workers' rights to fair employment opportunities, employment law can ensure that AI does not become a threat to job security.
AI algorithms can also be complex and opaque, which makes it increasingly difficult for employees to understand how decisions are made. When an AI system makes a decision, it is hard to identify who is responsible for it. This lack of transparency erodes trust and accountability in hiring and management processes: candidates may be rejected without knowing which factors influenced the decision, creating a sense of unfairness and inequity (Artificial Intelligence and National Security, ch. 11). To address this, employment law should require AI systems to explain why a candidate was accepted or rejected. Employers should be required to disclose when AI is used in the hiring process and to provide candidates with explanations of how decisions are made. This transparency could extend to giving unsuccessful candidates feedback based on objective criteria, helping them understand and improve their profiles. Employment law could also require companies to keep records of AI decisions and to make this information accessible to anyone claiming they were discriminated against (Goodman & Flaxman, 2017). AI systems must be transparent to ensure trustworthiness and fairness for candidates, and employment law should mandate that even where AI is used, employers remain involved in the hiring process and able to justify why candidates were accepted or rejected.
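The record-keeping duty sketched above could be as simple as retaining a structured record for every AI-assisted decision. The field names below are illustrative, not drawn from any statute or real system:

```python
# Minimal sketch of a decision record a company could be required to
# retain for each AI-assisted hiring decision. All fields and values
# are hypothetical examples.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class HiringDecisionRecord:
    candidate_id: str
    model_version: str       # which version of the screening model ran
    outcome: str             # e.g. "advanced" or "rejected"
    top_factors: list        # features that most influenced the score
    human_reviewer: str      # the person accountable for the final call
    timestamp: str

record = HiringDecisionRecord(
    candidate_id="C-1042",
    model_version="screener-v3.1",
    outcome="rejected",
    top_factors=["years_experience", "missing_required_certification"],
    human_reviewer="hr.lead@example.com",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(json.dumps(asdict(record), indent=2))
```

Note the `human_reviewer` field: tying every automated outcome to an accountable person is precisely the human-in-the-loop requirement the essay argues employment law should mandate.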
Employment law can also play a significant role in workplace management. Previously, most companies managed payroll manually, a tedious and meticulous process carried out by employees. This not only made errors more likely but demanded considerable effort, and some companies even had to hire additional staff to handle it. The introduction of AI into workplace management tasks such as these is therefore a significant advance, allowing companies to be more cost-effective and productive. Payroll automation uses technology to streamline and manage an organisation's payment processes: companies use software that handles everything from calculating wages to tax deductions (KPMG, 2023). The risk of relying solely on such software, however, is that AI systems may miscalculate pay under minimum-wage laws or other important labor standards, and companies may become so reliant on the technology that they rarely check whether payroll is being processed as it should be. Employment law should therefore mandate regular checks of AI systems to ensure accurate calculation of wages and benefits, and require employers to verify that these systems comply with local and national labor laws.
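The kind of compliance check proposed above can be sketched in a few lines: after each automated pay run, verify every employee's effective hourly rate against the applicable wage floor. The rate and pay data below are illustrative, not legal advice:

```python
# Minimal sketch of a post-run payroll compliance check: flag any
# employee whose effective hourly pay falls below the wage floor.
# The rate and the pay-run data are hypothetical examples.

MINIMUM_WAGE = {"UK_21_plus": 11.44}   # illustrative hourly rate in GBP

def check_pay_run(pay_run, category="UK_21_plus"):
    """pay_run: list of dicts with 'employee', 'hours', 'gross_pay'.
    Returns (employee, effective_rate) pairs below the wage floor."""
    floor = MINIMUM_WAGE[category]
    violations = []
    for entry in pay_run:
        effective_rate = entry["gross_pay"] / entry["hours"]
        if effective_rate < floor:
            violations.append((entry["employee"], round(effective_rate, 2)))
    return violations

pay_run = [
    {"employee": "E1", "hours": 40, "gross_pay": 500.00},  # £12.50/h, fine
    {"employee": "E2", "hours": 40, "gross_pay": 420.00},  # £10.50/h, below floor
]
print(check_pay_run(pay_run))  # → [('E2', 10.5)]
```

Mandating that such a check runs on every pay cycle, with violations escalated to a human, is the kind of concrete verification duty the paragraph argues employment law should impose.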
To conclude, AI has changed the legal landscape and brought both opportunities and challenges. Although some expect AI to be less prejudiced than humans, it is trained on past data, and ATS software often replicates historical patterns, producing discriminatory hiring practices. Employment law should therefore evolve to reduce these biases and certify that all candidates are given an equal opportunity. The efficiency gains offered by AI should be balanced by reskilling and training workers: laws should require companies to reskill their employees to work alongside AI rather than be replaced by it. The lack of transparency in AI should be addressed by requiring companies to keep records of the decisions it makes. AI should be a tool that assists employees, not a separate entity trusted to make decisions on the company's behalf. Employment law must evolve to establish fairness, accountability, and inclusivity in hiring and workplace management.