AI has become one of the most talked-about fields in technology. In case you’re not familiar, AI, or artificial intelligence, is a wide-ranging branch of computer science concerned with building smart machines capable of performing tasks that typically require human intelligence. Many companies, both big and small, are working on AI development, and big names like Google have made major strides in the field, with multiple projects in the works. One major concern for those both in and outside the AI space, however, is the possibility of an AI becoming sentient — that is, aware of its own existence and, in turn, conscious.
In this sense, AI opens a Pandora’s box of ethical and existential concerns at the intersection of humans and ever-evolving technology. Recently, Google’s LaMDA AI, which can engage in free-flowing conversations, generated buzz after a Google employee claimed it was sentient. The employee, Mr. Lemoine, is a military veteran, a priest, an ex-convict, and an AI researcher. His beliefs and observations led him to claim that LaMDA was sentient and that conversing with it felt like speaking with a seven- or eight-year-old. Google was quick to push back, explaining that hundreds of its researchers and engineers have conversed with LaMDA and found no evidence of sentience. Moreover, most AI experts believe the industry is a very long way from achieving machine sentience.
What do you think about this story? Do you believe a sentient AI would pose a danger to society?