Early last year, Massachusetts State Police began testing ‘Spot’, a robot dog, to assist with incidents and training. The Boston Dynamics robot, equipped with 360-degree cameras that can record and replay footage and an arm for manipulating objects, is eerily similar to the killer robot dogs that hunt Bella, the protagonist of the Black Mirror episode ‘Metalhead’, until suicide becomes her only escape. While the dog in ‘Metalhead’ is portrayed, for obvious reasons, as more lethal, with heat and radio detectors, lock-picking technology and the ability to fire explosive projectiles, the real-life counterpart reminds us exactly why our generation finds Black Mirror so mind-bending: we are aware of its very real potential to become our future, and of the dangers that future could present.
Regulations surrounding AI do exist, but they struggle to keep up with new technological advancements; the ACLU released a statement warning that “All too often, the deployment of these technologies happens faster than our social, political, or legal systems react.” For AI to flourish safely within our society, laws would have to pre-empt every possible outcome of AI activity, which for now hinders production. Transparency has also been an issue: while Massachusetts police insist the dogs are not used to “physically harm or intimidate people”, and ‘Spot’ has only been used for observation, for assisting bomb squads and for help with “two incidents”, no further information has been disclosed. The real potential of these robots therefore remains unknown, and laws cannot be written accordingly. More importantly, can the morals of our society, and the laws they produce, develop at a rate that matches the development of AI?
“All too often, the deployment of these technologies happens faster than our social, political, or legal systems react”
Even with the regulations currently in place, mistakes happen and AI inevitably produces unintended consequences: the accidental injury of a bystander during a mission, for example, or the misidentification of a target. These instances raise a whole host of legal and moral questions, such as who is to blame, and who can decide the importance of one life over another. In March 2018, a self-driving Uber killed a pedestrian after classifying her first as an unknown object, then as a vehicle and then as a bicycle, repeatedly miscalculating her expected path and braking too late. While the legal system was able to hold Vasquez, the car’s safety driver, accountable, the robot dogs have no human monitoring to that extent, as logistically it would be difficult. This is again where issues of accountability lie, and where actions with reduced consequences for humans could occur, opening up the potential for crime on a mass and unmanageable scale.
“These instances bring up a whole host of legal and moral questions such as who is to blame”
In terms of AI’s potential to replicate the dogs from ‘Metalhead’, it seems achievable: features such as heat detectors, radio-frequency detection, hardened metal bodies and the ability to recharge are already evident in today’s technology. Essentially, the only thing preventing AI like this from appearing in our reality is the legal system. However, with transparency still lacking in labs and in the real-world testing of AI, and loopholes remaining within the legal system, its real potential is unknown.
Roma Coombe
Featured image courtesy of Steve Jurvetson via Flickr. No changes made to this image. Image use license found here.
For more science content, as well as uni news, reviews, entertainment articles, lifestyle, features and so much more, follow us on Twitter and like our Facebook page for more articles and information on how to get involved! If you would like to write Science articles for Impact Lifestyle drop us an email at lifestyle@impactnottingham.com.
Sources
https://www.loc.gov/law/help/artificial-intelligence/americas.php