Google Row On AI Troubles


Both inside and beyond the tech community, artificial intelligence remains a sensitive subject. A dispute within Google over whether the company has created technology with conscious thought has spilled into the open, exposing both the ambitions and the risks of AI that can sometimes seem all too real.

 

The Silicon Valley giant suspended an engineer last week after he claimed that the company’s AI system LaMDA appeared “sentient,” an assertion Google formally rejects. Several specialists told AFP they shared that skepticism about the consciousness claim, but added that human nature and ambition can easily cloud the issue. “The problem is that… when we encounter strings of words that belong to the languages we speak, we make sense of them,” said Emily M. Bender, a professor of linguistics at the University of Washington. “We are performing the labor of envisioning a mind that is not there.”

LaMDA is a tremendously powerful system that replicates how people converse in written chats, built on cutting-edge models and trained on more than 1.5 trillion words. According to Google’s explanation, it is based on a model that studies how words relate to one another and then predicts which words it expects to come next in a sentence or paragraph.
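In vastly simplified terms, that kind of next-word prediction can be sketched with a toy bigram model. Everything below (the tiny corpus, the raw counts, the one-word lookback) is an illustrative assumption, not a description of Google’s system:

```python
from collections import Counter, defaultdict

# Toy stand-in for a language model: count which word follows which
# in a tiny corpus, then predict the most frequent successor.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# following[w] counts every word observed immediately after w.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most common word seen after `word`, or None if unseen."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("sat"))  # "on" is the only word observed after "sat"
```

Real systems such as LaMDA replace these raw counts with learned neural representations and condition on far more than one preceding word, but the core task, scoring candidate next words, is the same.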

 

“It’s still, at some level, just pattern matching,” said Shashank Srivastava, an assistant professor of computer science at the University of North Carolina at Chapel Hill. “You can definitely find some threads of what would seem to be meaningful dialogue, some highly creative text that they can produce. But in many instances, it quickly deteriorates.”

 

Attributing consciousness, however, is a thorny matter. It often invokes benchmarks such as the Turing test, which a machine is considered to pass if a human holding a written conversation with it cannot tell that it is not speaking to another person. “That’s actually a quite straightforward test for any AI of our vintage here in 2022 to pass,” said Mark Kingwell, a philosophy professor at the University of Toronto.

 

He continued, “A tougher test is a contextual test, the sort of stuff that existing systems appear to get tripped up by: common sense knowledge or background notions, the types of things that algorithms have a hard time with.”

 

‘No easy answers’

In and outside the tech sector, AI remains a touchy subject that can elicit awe but also some unease. In a statement, Google was swift and firm in downplaying the suggestion that LaMDA is self-aware. These systems can “riff on any fanciful topic” and imitate the exchanges found in millions of sentences, the company said, adding that hundreds of researchers and engineers have conversed with LaMDA and that, as far as it knows, no one else has anthropomorphized the system or made sweeping claims about it.

 

At least a few experts viewed Google’s response as an attempt to shut down conversation on an important topic. Susan Schneider, a professor at Florida Atlantic University and founding director of its Center for the Future of the Mind, argued that public debate on the subject is vital, because public understanding of just how vexing the issue is, is key. “There are no simple answers to concerns of awareness in machines,” she added.

 

At a time when people are swimming in a tremendous amount of AI hype, as Bender put it, a lack of skepticism among those researching the topic is also possible. A great deal of money is being poured into the field, she noted, so the people working on it do not necessarily maintain reasonable skepticism; instead, they receive a very strong signal that they are doing something important and real.

 

As evidence that AI can also make grave mistakes, Bender pointed to research showing that a language model can pick up racist and anti-immigrant biases from training on the internet. For Kingwell, the question of AI sentience blends elements of the dystopian novels “Brave New World” and “1984,” which grapple with issues such as technology and individual freedom.


Authors

Kyle
Jin