“Can machines think?” — The ethics of AI
By Camellia Zheng '23
Alan Turing posed the question in 1950, yet no one can offer a definitive answer even in 2021. People are eager to embed new abilities into machines through “deep learning,” teaching computers to learn on their own, and to invite robots into the human world, assigning identities to fictional characters and even introducing them as romantic partners. But are machines capable of independent thinking and emotional bonds?
The word “think” has taken on different definitions over the years alongside the development of technology. At first, decision-making was the breakthrough of machine learning: AlphaGo, the first computer program to defeat a professional human player at the board game Go, showcased machines’ potential for judgment and logical analysis. The field then moved on to human-like robots that can better read emotions and mimic human behavior, such as Sophia, introduced by the company Hanson Robotics as the world’s first social humanoid robot. Next, people put free will at the center of the discussion, wondering whether a robot could be given a backstory and even an individual lifestyle. That is when projects such as Miquela Sousa, or Lil Miquela, a computer-generated character launched in 2016 as an Instagram profile, began, and when Hua Zhibing enrolled earlier this month as the first virtual student at Tsinghua University, one of the top universities in China.
Even so, many people are not satisfied with this progress, which ethical critics have hindered since the start. While machines free people from tedious housework and repetitive assembly lines, they also outcompete humans in many areas that were once exclusively associated with human beings. Once machines overcome the barrier of independent thinking, only emotion and feeling may remain to hold the progress back. Technology in and of itself is not necessarily harmful, yet the lack of new legislation to back it up can seriously affect people’s acceptance of a blurring ethical and traditional gap between humans and other creatures. For instance, “robotic romantic partners” could turn into toys for venting emotions and abuse, and “autonomous vehicles” would create further problems for traffic law enforcement if the accommodations beyond the updated technology itself are left behind. When the line between humans and machines blurs, some feel threatened while others grow more excited. Will people become extinct? Who will be the intelligent thinker, and the dominant competitor, of the future?