how do we ensure future technology remains positive for all?

by | Jul 12, 2022 | digital transformation, intelligent automation, process excellence

Last week we shared that our copies of Tech Treats & Treasures arrived. Beth, our Process Lead, and Mark, our CEO, both contributed content to the book, and we are excited to share Beth’s article below.

We highly recommend purchasing a copy, not only because it is a fascinating read, but also because it helps Cancer Central continue to provide their fantastic support.

Technology is a tool we can use to create, advance, and improve. Huge benefits come with using this tool: a quick Google search returns positive impacts as good-news stories; a drone helping to save the life of a 71-year-old man injured in a hard-to-reach area, a young woman creating algorithms to turn sign language into written words in real time, a paralysed man able to play a musical instrument for the first time using his eyes. Like any tool, technology can deliver a positive and/or negative impact, sometimes disruptive or abused through confusion about its purpose, a lack of appropriate training, or even carelessness leading to errors.

A popular transportation company experienced a chilling example of this in 2016, when testing self-driving cars powered by onboard AI (Artificial Intelligence) computers. One of the fleet allegedly failed to recognise six red lights at a busy intersection with pedestrians present. It was originally claimed the incident was down to human error; however, the media investigated further and found internal documents showing the error was due to the vehicle’s mapping program not recognising the traffic lights. With self-driving cars just one of the ways technologies are set to revolutionise the transportation industry, this is a scary mistake and one we need to learn from fast when putting decisions and control points into automated hands, especially as this technology is quickly evolving to learn from real-life actions, making its own rules based on evidence-led situations.

The above raises an important question: how do we make sure these tools make the positive impact intended? As a Process Lead, I see first-hand how emerging technologies can improve business efficiency for my clients, their people, and customers. The most successful implementations are those that embed well with the people affected; in my experience, this comes through collaborative analysis, recognising existing ways of working, and understanding the upstream and downstream impact. Having good representation of the people impacted along on the journey, and strong governance framing the delivery of detailed, tested, and well-transitioned to-be designs, is vital to any positive technology implementation.

Experience has taught us that technologies and their impact on people should always be at the forefront. The recent roll-out in China of a system that ranks citizens on their “social credit” therefore really grabbed my attention. For context, people and/or companies are measured and then rewarded or punished based on their everyday interactions. The system is currently voluntary, though the plan is for it to become mandatory and unified across the nation, with each person given a unique code used to measure their social credit score in real time. For those scored poorly, this could mean being unable to buy premium plane tickets or secure good credit terms for a mortgage.

It has been contended that a scheme of this type would level the playing field, with privileges becoming available through good behaviour rather than power and money. Alternatively, the extremity of being constantly watched is heavily debated, with concerns around privacy and automated decision making. Opinions among the scheme’s participants themselves have also differed vastly. Some of those involved have described it as a great initiative that is making them better people. Others compare it to a constant ‘Big Brother’-style performance scheme, used to control and coerce good behaviour rather than steer it.


The proposed roll-out of standardised, automated decision making, deciding real-life consequences for people, drew comparisons with an episode of the popular technology TV series ‘Black Mirror’, in which individuals were steered into good behaviours by being rated by their peers after social interactions, leading to positive or negative societal consequences dependent on the rating. The ‘Black Mirror’ series features multiple episodes showing how technology could continue to evolve and be applied within society. It is often debated whether the way an episode unfolds is down to the technology itself or to people’s use of it and their responsive behaviours.

Like the above, some people promote the use of technology to deploy and decide positive or negative reinforcements of behaviour. Some of the more provoking episodes enact differing views on whether this is a dramatisation of a distant dystopian future or of current and developing technology advancements. Concerning the latter, it is important to be objective about the potential impact of any advancement. How do we truly determine a positive impact, when outcomes can be viewed so very differently by individuals?

With AI poised to become a huge movement in the next five years, and most major technology companies investing in AI right now, we should all be aware of the associated risks and lessons learned. The robot seems to make the perfect villain on the big screen, and with robot-fear increasingly real, this area of advancement needs to be better understood. The fear of the unknown depicted in such dramatisations still appears to be very real, with concerns around embedding bias into machines. This is valid and something we should be talking about a lot more, particularly conversations to align this type of decision-making technology with zero-bias regulations, just as we do for ISO (information security) certifications and GDPR (General Data Protection Regulation). If the AI seeds we are planting right now are to become our future decision makers, we need to ensure this is done fairly, through evidence-led policies and guidelines. Should we be giving unregulated decision-making abilities, and their consequences for people, to machines?

It is widely agreed that AI and emerging technologies can be used to create better and more efficient ways of working and living. A fitting example is the work being done in the SLR (Sign Language Recognition) space, with algorithms that translate hand movements into written words. The RNID’s Working for Change report found that 35% of business leaders surveyed in the YouGov poll did not feel confident about employing a person with hearing loss. Their Hidden Disadvantage report found that 70% of people with hearing loss who responded to the survey said that hearing loss sometimes prevented them from fulfilling their potential at work. SLR is therefore a great positive stride, and I look forward to seeing how the technology unfolds to help remove this barrier. Continuing to push boundaries creates a real opportunity to harness standardised decision making and machine learning to remove gender, race, age, social class, and disability bias. Over time this really could help to drive out discrimination and privilege in the workplace and other areas of our community.

It is the person behind any tool who shapes the motive for its use and the form and range of the impact it yields. A hammer, for example, can be used to create, advance, and improve, but can also be used to unsettle, control, and even hurt people when deployed incorrectly or misused. We should look at technology in the same way: it has the ability to create momentous change, steer good behaviour, and help people, but also the ability to control and to carry bias and bad behaviours forward from people to machine. This brings me to my final question: how do we ensure this does not happen?

The UK government recently conducted a review into bias in algorithmic decision making, setting out some key next steps for the government and regulators to support organisations to get their algorithms right, whilst ensuring the UK ecosystem is set up to support good ethical innovation.

A core theme of the report is that we can now adopt a more rigorous and proactive approach to identifying and mitigating bias in key areas of life, with clever use of data enabling organisations to shine a light on existing practices and identify what is driving bias. When changing processes that make life-affecting decisions about individuals, it is advised to always proceed with caution. The report recognises that algorithms cannot do everything. There are some aspects of decision making where human judgement, including the ability to be sensitive and flexible to the unique circumstances of an individual, will remain crucial.
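To make the idea of “shining a light on existing practices” concrete, here is a minimal sketch of one common bias measure, demographic parity: comparing how often a decision process approves people from different groups. The group names and outcomes below are entirely hypothetical, and real bias audits use richer measures than this single gap.

```python
# Minimal sketch: measuring one simple form of algorithmic bias
# (the demographic parity gap) on hypothetical approval decisions.
# Group labels and outcomes are illustrative, not real data.

def selection_rates(decisions):
    """Approval rate per group, from (group, approved) pairs."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Largest difference in approval rates between any two groups.

    0.0 means every group is approved at the same rate; larger
    values flag that the process treats some groups differently.
    """
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
print(selection_rates(decisions))       # {'group_a': 0.75, 'group_b': 0.25}
print(demographic_parity_gap(decisions))  # 0.5
```

A gap like the 0.5 above does not by itself prove unfairness, but it is exactly the kind of evidence-led starting point the report describes: a number that prompts humans to ask why the rates differ.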


Other key takeaways include the need for senior decision makers in organisations to engage with understanding the trade-offs inherent in introducing an algorithm. To be able to make informed decisions on how to balance risks and opportunities, they should expect and demand sufficient explainability of how an algorithm works, before deploying it into a decision-making process. Transparency is also key here in helping organisations build and maintain the wider public’s trust in such implementations.

Findings contend that society will need to be engaged in this process. Technical expertise is required to navigate choices, however fundamental decisions about what is fair cannot be left to data scientists alone. Decisions should only become legitimate if society agrees and accepts them. Regulators and industry bodies should work together with technical experts and wider society to agree best practice within their industry, establishing appropriate regulatory standards. The review recommends that there should be clear standards for anticipating and monitoring bias, for auditing algorithms and for addressing problems, encouraging the CDEI (Centre for Data, Ethics and Innovation) to play a key role in supporting organisations, regulators, and government in getting this right.

There is an ethical obligation to act wherever there is a risk that bias is causing harm, and instead to make fairer, better choices. Whilst embracing the positive impacts of technologies, it is also important to question whether this trajectory could change. More work is needed in this area, with regulations welcomed to prevent or reduce any potential biases in this field, ensuring technology remains positive for all in the future.

We highly recommend purchasing a copy of the Tech Treats & Treasures book, as not only does it contain lots more interesting thoughts and snippets from technology leaders, but it also helps Cancer Central continue to provide their fantastic support.

Mark Davis
