Do Androids Dream?
The possible consequences of Artificial Intelligence (AI) have been explored in literary and cinematic arenas for decades. From the novel ‘Do Androids Dream of Electric Sheep?’ to the cartoon ‘The Jetsons’, the movie ‘I, Robot’, and the existential book ‘Our Final Invention: Artificial Intelligence and the End of the Human Era’, all have expounded on the possible consequences of allowing AI to become an integral part of our day-to-day lives. They have helped shape our views and perceptions of what AI is and what it could become.
Two movie franchises that really shaped the way I view AI were ‘Terminator’ and ‘The Matrix’. The destruction of society by sentient beings that we ourselves created is a terrifying thought to me. My perception of AI, based on movies such as these, has always been one of distrust in allowing the technology to become too controlling, too self-aware, and too integral to my way of life. I have been a strong believer in human oversight when it comes to tech that can essentially learn and grow.
Over the years I have relaxed my distrust of AI a bit as I have seen the powerfully positive ways in which it has enhanced our lives. Some examples are natural language processing tools and proactive healthcare management AI (such as COVID tracking and prediction). BUT, and that is a big but – I still wholeheartedly believe in human oversight where AI is concerned.
What brought me to this topic today was an article about how the UK Government would like to scrap the right to ask for a human review of decisions made entirely by AI systems. The article can be found here. Even computer experts think ending human oversight of AI is a very bad idea.
In a consultation that was launched earlier this year, the Department for Digital, Culture, Media and Sport (DCMS) invited experts to submit their thoughts on some new proposals designed to reform the UK’s data protection regime.
Among the proposals was a bid to remove a legal provision that currently enables citizens to challenge a decision made about them by automated decision-making technology and to request a human review of that decision.
The consultation argued that this rule will become impractical and disproportionate in many cases as AI applications grow over the next few years, and that planning to always maintain the capability to provide human review will become unworkable.
Thankfully, experts in AI strongly discouraged such a course of action. While this particular instance may seem, on the surface, to be a foolhardy attempt at an end-run around bureaucracy, such a ruling could have significant ripple effects on future decisions around AI oversight. If the government succeeds in allowing this process to be driven entirely by AI, the case will soon be made that other systems and processes can run on AI unchecked by human oversight. That is a slippery slope.
There is still much debate as to whether self-aware AI will ever exist, and if it does, what the timeline is. (If you have not seen the movie ‘Free Guy’, it is a great example of a utopian AI environment.) Unfortunately, and probably correctly, Pew Research doubts that ‘ethical’ pursuits in AI are coming anytime soon. They foresee AI development continuing to be driven by profit and social control over the next 10 years.
This brings us right back around to why I chose to write about oversight in AI, and why I join the experts in agreeing that removing human oversight and intervention from AI is a VERY bad idea for the foreseeable future.
I’ve seen that movie and read that book – no thanks.
– OpEd by Jennifer Gilligan 10.14.21