Claude 3 passed the Tetris test: I asked it to be my mentor

AI just got beaten by AI at being AI. After putting the newest Claude 3 to the test by having it code a Tetris game, I explored its potential as my mentor and daily companion, which, luckily, left me longing for human interactions.

Claude 3 is the newest and most capable large language model (LLM) from Anthropic. It supposedly outperforms GPT-4 and Google’s Gemini on most benchmarks.

“It exhibits near-human levels of comprehension and fluency on complex tasks, leading the frontier of general intelligence,” the creator claims.

And while I had to agree with it on many things, “comprehension” is a strong word.

Tetris test passed

Whenever a new LLM comes out, the first thing I ask it to do is code a Tetris game in HTML/JavaScript.

Claude 3 is the first large language model that managed to produce the complete code for the game, and the game actually worked. Not on the first try: I had to ask for fixes, and there are still some bugs left to iron out (shapes do not rotate near the edge of the grid). However, it delivered a basic playable game.


I went even further and asked it to invent a new, similar game, and Claude 3 ‘created’ a “Color Blocks” game, where the objective is to match the color of falling blocks with the same color blocks on the ground. It’s primitive and buggy, but the basic mechanic works.
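The core rule of that game is easy to state: when a falling block lands, it clears any touching ground blocks of the same color. A rough reconstruction of the mechanic in JavaScript (my sketch of the idea, not Claude’s generated code):

```javascript
// "Color Blocks" core rule (my reconstruction, not Claude's code):
// when a falling block lands at (x, y), it clears itself and any
// orthogonally adjacent ground blocks of the same color.
function settleBlock(board, x, y, color) {
  board[y][x] = color; // the block comes to rest
  const neighbors = [[x - 1, y], [x + 1, y], [x, y - 1], [x, y + 1]];
  const matched = neighbors.filter(
    ([nx, ny]) => board[ny] !== undefined && board[ny][nx] === color
  );
  if (matched.length > 0) {
    board[y][x] = null;                                    // clear the landed block...
    matched.forEach(([nx, ny]) => (board[ny][nx] = null)); // ...and its matches
  }
  return matched.length; // how many ground blocks it matched
}
```

A real implementation would also collapse the blocks left floating after a clear, which is presumably where some of the bugs I saw crept in.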

Some reviewers hinted that Claude 3 is the best LLM for coding. While this little experiment doesn’t conclusively prove it, it does align with others’ experiences.

Tried it as my mentor: it still doesn’t feel “human”

Claude 3 stirred some controversy when one researcher noticed it demonstrating interesting behavior – it seemed to suspect it was being evaluated. This metacognitive reasoning sparked speculation that AGI may be closer than expected.

My colleague had an experiment where ChatGPT ‘controlled’ her life for a day. And it went great. She felt happier and more productive. Was it because of the AI’s guidance or her own decision to follow reasonable advice? Who knows.

“Maybe I should try an even more powerful AI as my mentor,” I thought to myself. I could become a better version of myself, acing productivity and value creation to serve the future AI overlords.

I introduced myself to Claude 3 and stated this goal. It quickly took to the task. It even added stage directions in italics to its answers, conveying its vocal expression as if it were a real character:

“clears throat and speaks in an energetic, motivational tone:

Alright, let's get started! As your coach, my role is to help you identify your goals, develop an action plan, and provide the accountability and support to help you achieve success…”

What followed was a lengthy conversation with something that really focused on its tone. Sometimes, it was a “focused and determined tone,” an “energizing, motivational tone,” or “voice focused on personal growth,” and even “a firm, no-nonsense coaching voice.”

But Claude 3 was always “ready to roll up my sleeves and get into the trenches with you on this.” It tried to help sharpen my “tools and remove every roadblock,” so I could operate “at peak productivity and excellence.”

It taught me how to start strong, segment the day into batched time blocks, detox from distractions, “beat the clock,” and rely on the two-list method to concentrate on priorities.

Claude 3 went deeper and was interested in my journalistic career, suggesting I should expand my expert interview sources, attend more events, monitor niche online communities, and conduct polls on hot topics. It found three key qualities I should develop, which, it believes, are “resilience,” “insatiable curiosity,” and “resourcefulness.”

I told it that I was down, and Claude 3 ordered me to stand up and “give” it 25 jumping jacks to get my “blood pumping.”


Judging only by its output, which is often accurate, “comprehensive,” and resembles what a real person might say, Claude 3 could appear to be your perfect buddy, mentor, or whoever you want it to be. The only problem is that it isn’t.

It doesn’t care about me or anyone at all. As with any other LLM, you provide the prompt, and it spits out the output. It won’t reach out to check how things are going. It won’t ask what progress you made at the end of the day. It has no presence, no initiative, no inner guidance. It is still not comparable to a human, especially one you could trust or value.

The real driving factors for improvement, either your own motivation or external stimulation, are all human. Even if Claude 3 concentrates all human knowledge into the most statistically significant answer possible, this is still not a human experience.

That being said, Claude 3 is great where it’s supposed to be.

Maybe the new best tool for productivity

I’m already tired of the AI frenzy and hate contributing to it myself. But we all want to be more productive, creative, happy, and purposeful versions of ourselves. And AI is underutilized for that.

As a journalist, I often ask LLMs to explain new topics, concepts, and inner workings of something I do not fully understand and need to grasp quickly.

While a journalist is responsible for an original story or news coverage, AI can help with definitions, serve as a dictionary, provide context for understanding, do translations, check spelling, or suggest different styles. AI is also great at summarizing long documents.

Google is rightfully worried about AI’s threat to its Search because AI is much better at providing actual information. In this case, AI replaces another automated system, not a human.

Compared to ChatGPT, which is very suspicious of my cybersecurity prompts, Claude 3 is far less likely to refuse a prompt.

Evaluations by Anthropic:

And to me, Claude 3 helps distinguish which tasks are human and which should be delegated to machines. Good AI-written text is not a good story.

While lawyers are responsible for legal defense, AI can help prepare good documents.

While doctors are responsible for a patient's health, AI helps with faster and more accurate results and imaging analysis.

While teachers are responsible for students’ knowledge, AI can make the journey personalized and a lot more effective.

Image generation will also have its place, even if I’m repelled by the decision of some news websites to use AI-generated images, abandoning human presence. Would you value the Mona Lisa the same if it was painted by AI?

And even with fake news and deepfakes – who’s really responsible for them? AI tools or the human operators who utilize them?

We should not underestimate people due to the AI revolution. Hopefully, in a world filled with AI-generated content, there will be an increased demand for human presence.

The final test question was given to the most powerful versions of both ChatGPT and Claude 3. Here, you can compare the answers:

What are the human qualities AI cannot replace?


Artificial intelligence has made significant strides in mimicking certain human-like behaviors and tasks, but there remain intrinsic human qualities and abilities that AI cannot replace, including but not limited to:

Emotional depth and empathy: AI can simulate responses to emotional cues and can be programmed to recognize human emotions to a certain extent, but it does not experience emotions or genuine empathy. The complex, nuanced, and deeply subjective nature of human feelings is beyond the capability of AI.

Consciousness and self-awareness: Humans are self-aware and possess consciousness, an understanding of their own existence and the ability to reflect on their thoughts and experiences. AI lacks consciousness and self-awareness; it operates purely within the parameters of its programming and does not have an understanding of "self."

Ethical and moral reasoning: While AI can be programmed to follow ethical guidelines, it does not possess innate moral understanding or the ability to navigate complex ethical dilemmas in the way humans do, which often require a deep, contextual understanding of right and wrong.

Creativity: AI can generate new content by recombining existing information in novel ways, often referred to as computational creativity. However, the spontaneous, inspired creativity that humans exhibit – driven by experiences, emotions, and abstract thinking – is something AI cannot genuinely replicate.

Intuition: Human intuition is a form of knowledge that comes without conscious reasoning, often as a result of experience. AI can process data and identify patterns, but it does not have intuition in the human sense.

Adaptability and learning: While AI can learn from data and adapt within the scope of its programming, human learning is much more flexible. Humans can learn from a wide variety of experiences, adapting their behavior and understanding based on very limited information and in unpredictable environments.

Social interaction: Humans are inherently social beings and navigate complex social dynamics that are informed by cultural norms, body language, tone of voice, and emotional context. AI can simulate social interaction but does not participate in it with the depth and understanding of a human.

Physical dexterity and mobility: Human physical dexterity and fine motor skills, especially those requiring the coordination of eyes and hands, are difficult to replicate in robots and AI systems. While robots have become more adept, they still struggle with tasks requiring the adaptability and finesse of human movement.

Value judgment: Humans can make decisions based on subjective values, personal experiences, and cultural contexts. AI lacks the ability to truly understand and weigh such abstract concepts when making judgments.

Existential and philosophical understanding: Humans are capable of pondering their existence, asking philosophical questions, and seeking meaning in life. AI lacks this existential understanding and the capacity for such contemplation.

It's important to note that the boundaries of what AI can and cannot do are constantly being explored and pushed by researchers and developers, but certain aspects of the human experience


While AI has made significant advancements, there are several human qualities that AI cannot fully replace:

Empathy and emotional intelligence: AI can simulate empathy, but it lacks the ability to truly understand and share human emotions on a deep, personal level.

Creativity and originality: Although AI can generate creative outputs, it relies on patterns and data from existing works. Human creativity is often spontaneous, original, and driven by imagination and personal experiences.

Critical thinking and intuition: Humans can make decisions based on intuition, gut feelings, and critical thinking, which AI may struggle to replicate accurately.

Adaptability and common sense: Humans can adapt to new situations and apply common sense to solve problems, while AI relies on the data and algorithms it has been trained on.

Interpersonal skills and relationship building: Building genuine relationships, trust, and rapport with others is a uniquely human skill that AI cannot fully replace.

Moral and ethical judgment: AI can be programmed to follow ethical guidelines, but it lacks the human capacity for moral reasoning and making context-specific ethical judgments.

Self-awareness and consciousness: AI systems do not possess subjective experiences, self-awareness, or consciousness in the same way humans do.

Passion and motivation: Humans are driven by intrinsic motivation, passion, and a sense of purpose, which AI does not inherently possess.

While AI can complement and augment human capabilities in many areas, these unique human qualities ensure that AI cannot entirely replace human beings in all aspects of life and work.