Humans vs Machines

LaMDA vs ChatGPT

Jingdong Sun
5 min read · Jan 6, 2023

In 2022, multiple news articles, blogs, and discussions were published on LaMDA, Google's breakthrough conversation technology, which some claimed had allowed Google's AI to "come to life."

Recently, OpenAI announced ChatGPT, which is open for anyone in the world to try. ChatGPT has generated a huge impact, with usage from numerous angles signaling the rise of the next generation of ML. Some believe AI could even potentially replace its creators — data scientists and software engineers — and have discussed the end of the era of programming.

AI technology is becoming more mature, intelligent, and human-like. However, can AI really "come to life," as LaMDA claims? And could AI really become smart enough to take over the work of a software engineer?

LaMDA vs ChatGPT

To better understand how LaMDA and ChatGPT behave with respect to these questions, and to support the discussion below, I compared the two systems.

In all images below, the content on the left is from the blog "Is LaMDA Sentient? — an Interview", and the content on the right is from my own "chat" with ChatGPT:

On sentience

On “Les Miserables”:

On zen koans:

On emotions or sentiments:

On soul:

In summary:

  1. LaMDA "thinks" of itself as alive, like a human with a soul.
  2. ChatGPT "thinks" of itself as an AI machine that can provide human-like information, answers, and solutions based on the data it was trained on.

What is Our (Human) Position?

Based on my comparison of LaMDA and ChatGPT, LaMDA's "proof" of life and a soul is really just proof that it can retain memories, answer questions, and provide sensible, human-like feedback, similar to an IBM Research project. From this perspective, LaMDA and ChatGPT are alike; they were just trained differently.

As computer science and the biological and life sciences advance, humans can simulate ever more life scenarios, and even manipulate life through animal cloning and gene editing. I do believe that machines can be made more human-like. However, machines can never truly become human or gain human life, nor can machines take over or control humans.

The reason is simple: humans have flesh and spirit. Machines can never have spirit (even though LaMDA claims to). As technology advances, and even as we see some ML models simulating human emotion, AI is ultimately still based on knowledge and logic, not on a spiritual "feeling," as LaMDA's answers to its interview questions showed.

It's true that machines can gain knowledge faster than humans. With deep learning, machines can carry out sophisticated actions based on that knowledge. We all get excited when ML models accomplish more work than humans or exceed our expectations. However, we, as humans, should also set realistic expectations for, and correctly scope, what ML models can do, in order to position them to support humans in better managing our world and this beautiful planet.

Machines can:

  1. Store more data than the human brain
  2. Gain knowledge significantly faster than humans
  3. Compute quickly based on knowledge and provide results/answers/solutions hundreds of thousands of times faster than humans
  4. Converse and advise based on learning
  5. Process multiple tasks in parallel (parallel computing)
  6. Make fewer errors in general. However, machine errors and unfair results can have serious consequences; this is one reason ML model bias is such an important topic.

But machines cannot:

  1. Have instinct
  2. Gain wisdom
  3. Be creative or invent: for example, create the next generation of ML models or evolve technology on their own, such as moving from binary computing to quantum computing
  4. Reason independently with learned knowledge
  5. Differentiate right from wrong, or improve from past mistakes. This is because ML models are trained to produce certain results/conclusions/outputs from certain inputs; correction can only happen through model improvements (MLOps).
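The last point above can be illustrated with a deliberately toy sketch (the names and setup are mine, purely for illustration): a "trained" model is a frozen mapping from inputs to outputs, so a wrong answer is repeated on every call until the model is retrained on corrected data.

```python
def train(examples):
    """Toy 'training': freeze an input->output mapping.

    Once built, the model cannot revise this mapping on its own;
    it simply replays what it learned, mistakes included.
    """
    return dict(examples)

# Train with one wrong label.
model = train([("2+2", "4"), ("3+3", "7")])

# The model repeats its mistake every time it is asked; there is
# no self-correction mechanism inside the model itself.
assert model["3+3"] == "7"

# Correction happens only by retraining on fixed data, i.e. the
# kind of external improvement loop MLOps practices formalize.
model = train([("2+2", "4"), ("3+3", "6")])
assert model["3+3"] == "6"
```

Real ML models are statistical rather than lookup tables, but the point carries over: fixing a systematic error requires an external retraining step, not introspection by the model.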

Some Remarks

There is much discussion on this topic, and many will likely disagree with my perspectives, so in lieu of a conclusion, I offer just a few remarks:

I am happy to see AI progression, and to witness machines doing more and more complex work like humans. They can gain knowledge quickly and use logic to reach outcomes to support humans in our work. However, machines can never become human, or outsmart humans, as machines cannot have spirit, nor gain wisdom.

Additionally, in my opinion, ChatGPT can give us source code or algorithms that it learned during training, but it cannot invent new algorithms or create innovative approaches to software problems. So AI cannot completely take over a software engineer's work, but it can support education, teaching, and learning.
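To make this concrete, the kind of code an assistant like ChatGPT readily reproduces is a well-known textbook algorithm. The sketch below is my own hand-written binary search in Python, not actual ChatGPT output; it stands in for the widely published code such models recall from training data, as opposed to algorithms they invented.

```python
def binary_search(items, target):
    """Return the index of target in a sorted list, or -1 if absent.

    A classic textbook algorithm: exactly the sort of widely
    published code an LLM can reproduce from its training data.
    """
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2        # midpoint of the remaining range
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1            # discard the lower half
        else:
            hi = mid - 1            # discard the upper half
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # prints 3
```

Reproducing such a routine on demand is impressive recall, but it is recall of existing human work, which is the distinction drawn above.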

A final remark based on pop culture: even with similar ML technology, different kinds of model training can produce machines with different intentions. We see this in the Terminator series: the Terminator in the first film kills, while the one in the second protects. This illustrates that intent matters in how we build and train our AI/ML technologies.
