Artificial intelligence in construction management: the ethical imperative

May 31, 2021 | Article

The construction sector is starting to adopt artificial intelligence (AI) and benefit from the technology’s ability to take past data sets and anticipate the future. Think safer workplaces and better procurement models. However, at the same time, the technology is presenting construction businesses with some new dilemmas. With more information comes more responsibility.

How can a construction organisation ensure robust data privacy and governance across its construction processes? What is the right level of transparency for different types of information? And how can you ensure fair, discrimination-free job sites when sensitive information, and insights, are at your fingertips?

AI is an incredibly powerful technology. Yet it’s readily available to anyone with a computer and access to the Internet. This mix of power and accessibility has some people advocating for the introduction of stringent ethical standards.

It is hard to argue against designing and applying safeguards to minimise the possibility of malicious abuse and unintentional misuse. But ethical interventions can be problematic. They are grounded in moral principles and often mean different things to different people.

Europe is at the forefront of AI ethical thinking. In 2019, the European Commission released its Ethics Guidelines for Trustworthy Artificial Intelligence. According to the Guidelines, trustworthy AI should be:

  1. Lawful, respecting all applicable laws and regulations;
  2. Ethical, respecting ethical principles and values; and
  3. Robust, from a technical and social perspective.

The Guidelines also put forward a set of seven essential requirements that AI systems should meet to be deemed trustworthy:

  1. Human agency and oversight: AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms should be ensured, which can be achieved through continued human interaction (human-in-the-loop) and ongoing human control and oversight (human-on-the-loop/human-in-command approaches).
  2. Technical robustness and safety: AI systems need to be resilient and secure. They need to be safe, with a fallback plan in case something goes wrong. They must also be accurate, reliable, and reproducible. That is the only way to ensure that harm, including unintentional harm, can be minimised and prevented.
  3. Privacy and data governance: Besides ensuring full respect for privacy and data protection, adequate data governance mechanisms must also be provided, considering the quality and integrity of the data, and ensuring legitimised access to data.
  4. Transparency: The data system and AI business models should be transparent. Traceability mechanisms can help achieve this. Moreover, AI systems and their decisions should be explained in a manner adapted to the stakeholder concerned. Humans need to be aware that they are interacting with an AI system and must be informed of its capabilities and limitations.
  5. Diversity, non-discrimination, and fairness: Bias must be avoided, as it could have multiple negative implications, from the marginalisation of vulnerable groups to exacerbating prejudice and discrimination. Fostering diversity, AI systems should be accessible to all, regardless of any disability, and involve relevant stakeholders throughout their entire life cycle.
  6. Societal and environmental well-being: AI systems should benefit all human beings, including future generations. They should be sustainable and environmentally friendly. Moreover, their impact on the environment, including other living beings, and their social and societal effects should be carefully considered.
  7. Accountability: Put mechanisms in place to ensure responsibility and accountability for AI systems and their outcomes. Auditability, which enables the assessment of algorithms, data and design processes, plays a key role, especially in critical applications. Moreover, ensure adequate and accessible redress.

The guidelines are likely to have real implications for construction businesses in the near future. To demonstrate the point, we have outlined three hypothetical examples below.

Breaches of privacy, transparency, and non-discrimination requirements

Imagine that SafetyAI, a fictional AI solution, is up and running at one of your construction sites. SafetyAI scrutinises and processes thousands of parameters on all aspects of the site, keeping people safe. While doing so, it collects large amounts of data on Joe, a long-term employee.

SafetyAI tracks Joe’s body temperature, facial expressions, pulse, body movements, and timing and frequency of his breaks. It also has access to all Joe’s medical records that are available to the company. Let’s take a look at three possible scenarios that could play out based on these facts.

In the first scenario, which focuses on data privacy and governance requirements, SafetyAI inadvertently discovers that Joe suffers from a rare disease. This disease does not prevent Joe from carrying out work allocated to him, but it may affect his performance in the future.

What if Joe’s manager takes this information and uses it against him, making him redundant? What if Joe stays on the job, but your insurance company finds out about his health situation? What if Joe himself is unaware of the disease?
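One practical safeguard for this kind of scenario is data minimisation: sensitive health inferences are stored separately, and each role sees only the fields it needs for safety decisions. The sketch below illustrates the idea in Python; all field names, roles, and the record structure are hypothetical, not taken from any real product.

```python
# Illustrative sketch: role-based filtering of worker monitoring data.
# All field names and roles are invented for illustration.

# Which fields each role is allowed to see.
ROLE_VISIBILITY = {
    "site_manager": {"worker_id", "current_risk_level", "break_compliance"},
    "safety_officer": {"worker_id", "current_risk_level",
                       "break_compliance", "pulse"},
}

def redact_record(record: dict, role: str) -> dict:
    """Return only the fields the given role may access."""
    allowed = ROLE_VISIBILITY.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}

record = {
    "worker_id": "W-1042",
    "current_risk_level": "low",
    "break_compliance": True,
    "pulse": 71,
    "inferred_conditions": ["rare_disease_x"],  # never exposed to managers
}

manager_view = redact_record(record, "site_manager")
```

In this design, the inference about Joe’s health never reaches his manager by default; releasing it would require an explicit, auditable decision rather than an accidental disclosure.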

In the second scenario, SafetyAI suddenly warns your site leader of an imminent incident that is likely to involve Joe. The manager immediately puts the construction site on hold and asks Joe to step aside. The manager then asks SafetyAI for more information to clarify the issue. While SafetyAI can forecast an incident with high likelihood, it can’t explain why.

This scenario showcases the system’s limitations and lack of transparency. The system isn’t equipped to explain its decisions in a way the manager and site staff can understand. This type of outcome could have been avoided by making different technical and implementation choices so the reason ‘why’ could be understood.
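One such implementation choice is to use a model whose output decomposes into per-feature contributions, so every alert carries its own explanation. The sketch below shows the idea with a simple weighted score; the weights and feature names are invented for illustration and are not from any real system.

```python
# Illustrative sketch: a risk score designed so each alert can be
# explained. Weights and feature names are hypothetical.

WEIGHTS = {
    "hours_since_break": 0.4,
    "ambient_temperature_c": 0.02,
    "near_miss_count_7d": 0.5,
}

def risk_with_explanation(features: dict):
    """Score = weighted sum of features; explanation = per-feature
    contributions, largest first, so a manager can see *why* it fired."""
    contributions = {
        name: WEIGHTS[name] * value for name, value in features.items()
    }
    score = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: -kv[1])
    return score, ranked

score, why = risk_with_explanation(
    {"hours_since_break": 5, "ambient_temperature_c": 34,
     "near_miss_count_7d": 2}
)
# 'why' ranks the drivers of the alert, e.g. hours_since_break first
```

A linear score is far simpler than most production models, but the principle scales: if explainability is a requirement, it has to be designed in, not bolted on after an incident.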

The third scenario, like the second example, involves a potential incident. Once more, the construction site stops, and the manager asks Joe to step aside. This time, when the manager seeks clarification, the information driving the insight is accessible and transparent.

It turns out that SafetyAI had reviewed the most recent safety incidents and identified that all events involved workers from a specific country of origin. Of course, using country of origin as a parameter is irrelevant to safety and a form of bias and discrimination. In this example, the algorithm has unfairly targeted Joe and the other workers.
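Two basic safeguards can catch this kind of failure: keep protected attributes out of the model’s inputs, and audit outcomes for disparities between groups. The sketch below illustrates both in Python; the attribute names and data are hypothetical.

```python
# Illustrative sketch: two basic bias safeguards. All names are
# hypothetical examples, not a complete list of protected attributes.

PROTECTED_ATTRIBUTES = {"country_of_origin", "nationality", "gender", "age"}

def strip_protected(features: dict) -> dict:
    """Safeguard 1: never feed protected attributes to the model."""
    return {k: v for k, v in features.items()
            if k not in PROTECTED_ATTRIBUTES}

def flag_rate_by_group(alerts: list) -> dict:
    """Safeguard 2: audit outcomes by computing the alert rate per
    group. A large gap between groups warrants investigation."""
    totals, flagged = {}, {}
    for a in alerts:
        g = a["group"]
        totals[g] = totals.get(g, 0) + 1
        flagged[g] = flagged.get(g, 0) + (1 if a["flagged"] else 0)
    return {g: flagged[g] / totals[g] for g in totals}

clean = strip_protected({"pulse": 70, "country_of_origin": "X"})
rates = flag_rate_by_group([
    {"group": "A", "flagged": True},
    {"group": "A", "flagged": False},
    {"group": "B", "flagged": False},
])
```

Note that stripping the attribute alone is not sufficient, since other features can act as proxies for it; that is why the outcome audit matters as well.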

If your organisation faced these dilemmas, how would your team respond? These fictional examples highlight the importance of understanding the obligations and possible impacts on your business. It’s often too late once an incident happens. Planning, procedures, and early actions will help to protect your workers, organisation, and reputation.

The autonomous paradigm shift in construction is coming

One day, autonomous systems will be commonplace, including on construction sites. Think large hoisting and lifting equipment, automated wall-erecting systems, load-carrying robots, welding and cutting machines, and quality control drones. The use of, and transition to, autonomous systems has its own set of ethical quandaries.

How will humans and automated machines co-exist on a job site? Will this increase or decrease the likelihood of accidents in the short or long term? It’s fair to say that AI systems will be keeping watch, computing real-time risk thresholds, and aiding decision-making throughout the journey.

One unresolved topic of debate is the benefit of human involvement, versus an automatic AI response, in an emergency. What if immediate action is required to save a life? Who is better placed to make this call: the human or the AI system?

Experienced people can make effective snap decisions, seemingly without conscious thought. If AI systems are to perform this function, they must be programmed to do so in advance.
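In practice, "programmed in advance" means the policy for when the system may act autonomously, and when it must defer to a human, is an explicit, reviewable decision made before deployment. The sketch below shows one way to encode such a policy; the thresholds and action names are invented, and a real system would need far more rigour.

```python
# Illustrative sketch: a pre-configured emergency policy. Thresholds
# and action names are hypothetical; real systems need far more care.

EMERGENCY_THRESHOLD = 0.95   # above this risk, act without waiting
ALERT_THRESHOLD = 0.70       # above this, escalate to a human instead

def decide(risk: float) -> str:
    """Encode in advance when the system may act autonomously
    (human-in-command) and when a human stays in the loop."""
    if risk >= EMERGENCY_THRESHOLD:
        return "auto_stop_equipment"    # immediate autonomous action
    if risk >= ALERT_THRESHOLD:
        return "alert_site_manager"     # human-in-the-loop decision
    return "continue_monitoring"
```

Setting those thresholds is itself an ethical decision, not just an engineering one, which is exactly why it should be debated and documented before the system is switched on.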

With an ethical framework in place, AI promises to make the construction industry safer and more productive than ever. However, each organisation should consider the risks and dilemmas alongside the benefits before implementing an AI solution. AI designers and implementers also have an essential role to play in tackling complex and highly sensitive ethical questions with the utmost care, transparency, and expertise.

About Denis Branthonne

Denis is the founder and CEO of Novade and believes in a hands-on approach to building projects. A business leader with a track record in building global teams, Denis grew up in France, obtained his Master of Science degree at MIT, and previously worked at Autodesk as VP of Emerging Markets and APAC. He remains a firm believer in the ‘get it done’ philosophy.

About Novade

Novade has a team of digital specialists dedicated to supporting clients in their digital transformation from the ground up. With global experience on a wide range of construction projects and processes, the team will be able to quickly adapt to your needs from specification through to delivery and on-site support.
