AI Intelligence Robotic Pet

The AI Robotic Pet is an embodied emotional-companion robot powered by a multimodal large model. It deeply integrates advanced environmental perception with personified emotional expression, aiming to provide, through subtle but perceptible emotional interactions, emotional value and warm companionship that go beyond utilitarian functionality.
Model: BABY K


Product Description
Main Overview

Built on an AI large model, the product creates delicate emotional interactions that consumers can genuinely perceive.

Situational Awareness:

Touch sensors, microphones, cameras, and infrared detection sensors covering the head and back allow it to perceive and interact with its surroundings.

Situational Understanding:

Voice input and visual input (focusing on static frames) are fused and fed to a text-image multimodal large model, which generates the conversational reply.

Infrared detection sensors and microphones detect biological activity, allowing the robot to proactively initiate interaction with users.
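As a rough illustration of the "Situational Understanding" flow above (the product's real software stack is not published), the following Python sketch shows how a voice transcript and a single camera frame could be bundled into one request to a text-image multimodal model. Every name here (Observation, MultimodalModel, understand) is a hypothetical placeholder.

```python
# Hypothetical sketch of the voice + static-frame fusion described above.
# None of these names come from the product; they only illustrate the flow.
import base64
from dataclasses import dataclass

@dataclass
class Observation:
    transcript: str    # text produced by ASR from the microphone
    frame_jpeg: bytes  # one static frame captured by the camera

class MultimodalModel:
    """Stand-in for the cloud-based text-image large model."""
    def chat(self, text: str, image_b64: str) -> str:
        # A real device would send this over the network to the cloud
        # model platform; here we just return a canned reply.
        return f"(reply conditioned on '{text}' and one image frame)"

def understand(obs: Observation, model: MultimodalModel) -> str:
    """Fuse the voice transcript and one frame into a single request."""
    image_b64 = base64.b64encode(obs.frame_jpeg).decode("ascii")
    return model.chat(obs.transcript, image_b64)

if __name__ == "__main__":
    obs = Observation("What am I holding?", frame_jpeg=b"\xff\xd8fake-jpeg")
    print(understand(obs, MultimodalModel()))
```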

Emotional Expression:

Multimodal emotional expression is output through the body and hand stepper motors, the LCD display (the eyes), and the speaker.
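As a loose sketch of this fan-out (the field names and values below are assumptions, not the product's real command set), one detected emotion could map to coordinated motor, eye, and speaker outputs:

```python
# Illustrative mapping from a detected emotion to the three output channels
# named above: stepper motors, LCD eyes, and speaker. Values are invented.
from typing import TypedDict

class ExpressionPlan(TypedDict):
    arm_swing_deg: int   # body/hand stepper motors
    head_turn_deg: int
    eye_animation: str   # LCD display (the eyes)
    sound_effect: str    # speaker

EXPRESSIONS: dict[str, ExpressionPlan] = {
    "joy":     {"arm_swing_deg": 45, "head_turn_deg": 30, "eye_animation": "heart_eyes", "sound_effect": "giggle"},
    "wronged": {"arm_swing_deg": 10, "head_turn_deg": 0,  "eye_animation": "wronged",    "sound_effect": "whimper"},
    "curious": {"arm_swing_deg": 20, "head_turn_deg": 90, "eye_animation": "curious",    "sound_effect": "hmm"},
}

def express(emotion: str) -> ExpressionPlan:
    """Fall back to a neutral 'curious' plan for unknown emotions."""
    return EXPRESSIONS.get(emotion, EXPRESSIONS["curious"])

print(express("joy"))
```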

Placement and Carrying Method:

Placed on a desktop or bedside table, or held in the arms.

Usage Scenarios:

A relatively fixed location with low mobility requirements; typically kept connected to the charger.

As an emotional companion robot, it:
· Has a biological appearance, personality, and emotions
· Understands natural language, actions, and the dynamic physical world
· Expresses itself through natural language, mechanical movement, and electronic animation

By contrast, ordinary smart devices typically have:
· No environmental awareness
· No ability to express emotions
· No cute appearance or feel
User Journeys and Their Physical Basis
I. Purchase decision: typical, easy-to-visualize selling points spark interest
· Dress-up: clothing, accessories
· Personality: voice adjustment, personality adjustment
· Practical functions: conversation translation, news search, knowledge popularization, life assistance
· Active perception: infrared sensor, sound sensor, drop sensor
· Entertainment commands: dancing, playing games, telling stories, ...
II. Exploring the product: arouse instinctive interest and surprise within 3-7 seconds through strong visual impact and an incredible sense of life
· Standby actions: 😐 dazing, 😪 drowsing, 🤨 looking around, ...
· Pet interaction: enjoying being petted, snoring, having its head patted, ...
· Special emotional expressions: shaking its head, blinking, being called or patted awake, ...
· Reliance: bickering with each other, sleeping together, playing together, keeping each other company
III. Daily use: build a relationship with the product and deliver continuous surprises through self-growth
· Emotional expression: 😄 joy, 😪 shyness, 😢 grievance, ...
· Special standby actions: low battery, charging, fully charged
· Natural language dialogue: actively initiating topics, memory, command-triggered conversations
· Active perception: acting cute, user leaving or returning home
Physical basis: ASR + LLM, visual image understanding, touch events, infrared timer, dialogue memory, character settings
System architecture (a rough code sketch follows the list):
· Networked cloud-based large-model platform: multimodal perception, large language model, conversational memory, network query
· Multimodal perception and control system layer: scene understanding, trigger control, lighting control, motor control, display control
· Hardware embedding layer (robot body)
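The layer list above can be read as a simple top-down call chain. The sketch below is only an assumption about how such a stack might be wired; all class and method names are invented, not taken from the product.

```python
# Hypothetical wiring of the three layers listed above. Only the layering
# is taken from the text; all names and behaviors are placeholders.
class CloudModelPlatform:
    """Networked cloud large-model platform: LLM, memory, web query."""
    def chat(self, prompt: str) -> str:
        return f"(cloud LLM reply to: {prompt})"

class HardwareBody:
    """Hardware embedding layer: motors, LEDs, display, speaker."""
    def drive_motor(self, degrees: int) -> None: print(f"motor -> {degrees} deg")
    def show_eyes(self, animation: str) -> None: print(f"eyes  -> {animation}")
    def set_leds(self, effect: str) -> None:     print(f"leds  -> {effect}")
    def speak(self, text: str) -> None:          print(f"voice -> {text}")

class PerceptionControlLayer:
    """Middle layer: scene understanding plus trigger/lighting/motor/display control."""
    def __init__(self, cloud: CloudModelPlatform, body: HardwareBody) -> None:
        self.cloud, self.body = cloud, body

    def on_voice(self, transcript: str) -> None:
        reply = self.cloud.chat(transcript)   # delegate understanding upward
        self.body.show_eyes("smile")          # drive the hardware downward
        self.body.set_leds("soft_blue")
        self.body.speak(reply)

PerceptionControlLayer(CloudModelPlatform(), HardwareBody()).on_voice("hello")
```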
Hardware Specifications
Environmental perception module
· Microphone: supports far-field voice recognition (5-meter range) and directional sound pickup for receiving voice commands.
· Camera: used for environment and object recognition.
· Infrared sensor: used for low-power wake-up triggered by a human body or pet.
· Touch sensors: distributed touch modules (head, back, abdomen) detect stroking and patting (such as "touching the head" and "tickling").
· Gravity sensor: senses the body's motion state and triggers a "distress signal" (such as the voice line "I fell and it hurts") when the product falls; a minimal detection sketch follows this table.
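For the gravity-sensor row above, one common way a "fell down" event is detected is a free-fall dip followed by an impact spike in acceleration magnitude. The snippet below is a generic sketch with assumed thresholds, not the product's actual algorithm.

```python
# Illustrative fall detection: a free-fall signature (acceleration magnitude
# near zero) followed by an impact spike triggers the distress voice line.
# Thresholds and sample data are assumptions for demonstration only.
import math

FREE_FALL_G = 0.3    # |a| well below 1 g suggests free fall
IMPACT_G    = 2.5    # a spike above this suggests hitting the ground

def detect_fall(samples: list[tuple[float, float, float]]) -> bool:
    mags = [math.sqrt(x*x + y*y + z*z) for x, y, z in samples]  # in g
    fell = any(m < FREE_FALL_G for m in mags)
    hit  = any(m > IMPACT_G for m in mags)
    return fell and hit

if detect_fall([(0.0, 0.1, 0.1), (0.1, 0.0, 0.2), (1.8, 1.9, 1.2)]):
    print("I fell and it hurts")  # distress voice line from the spec above
```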
Human-computer interaction module
· LCD expression screen: 4.28-inch LCD screen (the eyes); supports dynamic expressions and binocular display (such as visual feedback when "playing dead" or "acting cute").
· Speaker: mono, 4 Ω, 5 W full-range speaker (for voice output such as "singing a song" or "telling a joke").
· Full-color LED light strips: colored light strips that play lighting effects matching the emotional state, or serve as indicator lights.
Motion control module
· Stepper motors: dual motors for arm waving and for turning the head and waist.
Data processing and communication module
· Main control chip: V821; basic function control, voice processing, binocular asynchronous display.
· Wi-Fi / Bluetooth: 2.4 GHz Wi-Fi + Bluetooth.
· Storage: 256 MB NAND flash, 64 MB DRAM.
Power and accessory modules
· Lithium battery: 3000 mAh / 7.2 V; supports fast charging; about 2 hours of active use and 2 days of mixed standby; equipped with a power-detection chip.
· Charging: USB Type-C.
· Accessory modules: interchangeable clothing and accessories in different styles.
Hardware Display
Software Description
AI large model: interacts with users through a cloud-based, networked AI large model;
Active output: when the user is not interacting with the product, it greets the user or takes autonomous actions;
Touch sensing: users can interact with the product by touching its body;
Translated dialogue: provides responses in multiple languages to meet users' interaction needs;
Environmental perception: senses falls and provides visual perception;
App: personalize and manage the product.
Product Display
Daily basic functions
Dialogue Interaction
Touch Interaction
Information Query
Life Assistant
Translation
Active Interaction
Core Features
Emotional expression:
Emotional expressions are generated from the user's patterns, language, and behavior; AI large-model chat mode is activated once the user responds.
Preset emotional expression:
A preset response is triggered according to the system settings, and AI large-model chat mode is turned on after the user replies; preset emotional expressions are explained in detail below.
Implementation: emotions are expressed through the screen, the body and hand stepper motors, the full-color LED light strips, and speaker sound effects.
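A minimal state-machine sketch of that sequence (trigger, then preset expression, then user reply, then AI chat mode), assuming nothing about the real firmware; all names below are illustrative.

```python
# Loose sketch of the interaction flow described above: a sensed trigger
# plays a preset emotional expression, and only after the user replies does
# the device switch into AI large-model chat mode.
from enum import Enum, auto

class Mode(Enum):
    IDLE = auto()
    EXPRESSING = auto()   # preset scene response playing
    CHATTING = auto()     # cloud LLM chat mode

class InteractionFSM:
    def __init__(self) -> None:
        self.mode = Mode.IDLE

    def on_trigger(self, event: str) -> None:
        print(f"play preset expression for: {event}")
        self.mode = Mode.EXPRESSING

    def on_user_reply(self, text: str) -> None:
        if self.mode is Mode.EXPRESSING:
            self.mode = Mode.CHATTING
            print(f"LLM chat mode on; first turn: {text}")

fsm = InteractionFSM()
fsm.on_trigger("head patted")
fsm.on_user_reply("Hello there!")
```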
Preset emotional expression:
The robotic pet has its own personality and emotions, just like a child, providing users with warm companionship and emotional value.
20 preset expressions show different emotions through the eyes.

These are coordinated with the arms swinging up and down (45 degrees) and the head turning left and right (90 degrees).

300 scene-specific spoken responses are preset, with more to be added later.
20 emotional eye settings: angry, eye-rolling, faint, acting cool, enthusiastic, sad, awkward, shy, laughing out loud, smiling, heart eyes, standby, sleeping, cute, wronged, Sun Wukong, dazed, curious, cross-eyed, evil.
Preset emotion-sensing methods:
Seven sensing methods trigger the pet's emotional expression, giving it an effortless sense of life:
· Touch sensing: touching the head, both sides of the back, and the abdomen
· Gravity sensing: detects falling and being picked up
· Voice sensing: recognizes a variety of voice commands
· Object recognition: identifies the environment and objects
· Biometric sensing: identifies living things and their behaviors
Examples of preset scene triggers (scene responses are triggered by the sensing methods above; a dispatch-table sketch follows these examples):
Touch Sensing: Touch the head, both sides of the back, and abdomen
Scene: Patting the cute pet's back
Emotional expression: body rotation + arm swinging + cute expression + cute sound effects
Gravity sensor: fall detection
Scene: Cute pet falls to the ground
Emotional expression: arms swinging up and down + aggrieved expression + frightened sound effect
Voice sensing: Recognizes multiple voice commands
Scene: Hearing "Dance"
Emotional expression: continuous body rotation and arm swinging
Object recognition: identifying the environment and objects
Scene: Seeing the cake
Emotional expression: Voice "This cake looks delicious, is it someone's birthday?" + curious expression
Biometrics: Identify living things and behaviors
Scenario: Seeing a user approaching
Emotional expression: body rotation + arm swinging + heart eyes + welcome sound effect
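Taken together, the five scenes above amount to a dispatch table from a sensed event to a bundled response (motion, eye animation, sound or voice). The sketch below mirrors those examples; the event keys and field names are assumptions, not the device's real command set.

```python
# Dispatch-table sketch of the scene examples above: each sensing event
# maps to a bundled response (motion + eye animation + sound/voice).
SCENE_RESPONSES = {
    "back_patted":   {"motion": "rotate_body+swing_arms", "eyes": "cute",       "audio": "cute_sfx"},
    "fall_detected": {"motion": "swing_arms_up_down",     "eyes": "wronged",    "audio": "frightened_sfx"},
    "voice:dance":   {"motion": "spin+swing_arms",        "eyes": "standby",    "audio": "music"},
    "see:cake":      {"motion": "none",                   "eyes": "curious",
                      "audio": "This cake looks delicious, is it someone's birthday?"},
    "user_approach": {"motion": "rotate_body+swing_arms", "eyes": "heart_eyes", "audio": "welcome_sfx"},
}

def respond(event: str) -> dict:
    """Return the bundled response for an event, or a neutral default."""
    return SCENE_RESPONSES.get(event, {"motion": "none", "eyes": "daze", "audio": ""})

print(respond("see:cake"))
```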
Application scenarios:
It can serve not only as a companion pet but also as a "little employee", making it even more practical.

Applied to stores: acting as a waiter or greeter;
Applied to livestreaming: acting as a host to increase interactivity;
Applied to dinner parties: acting as a waiter to liven up the atmosphere;
Applied to schools: acting as a little teacher to answer questions and resolve doubts;
Applied to ...
Multilingual interaction:
It supports multilingual interaction, providing seamless communication and warm companionship for users around the world, whatever language they speak.
1. In single-user interaction, it automatically identifies the language and responds in that language;
2. In single-user interaction, it can also provide responses in multiple languages;
3. In multi-user interaction, it automatically recognizes each user's language and can act as a translator (see the sketch below).
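A toy sketch of the per-utterance language handling described above; the detection heuristic and helper names are placeholders only, since the real device would rely on its ASR/LLM stack.

```python
# Sketch of the multilingual behavior above: detect the language of each
# utterance and answer in that language. The helpers are placeholders,
# not a real library API.
def detect_language(text: str) -> str:
    # Placeholder heuristic: any CJK character counts as Chinese.
    if any("\u4e00" <= ch <= "\u9fff" for ch in text):
        return "zh"
    return "en"

def reply_in(language: str, prompt: str) -> str:
    return f"[{language}] (reply to: {prompt})"

def handle_turn(utterance: str) -> str:
    lang = detect_language(utterance)   # per-utterance auto-detection
    return reply_in(lang, utterance)    # respond in the same language

print(handle_turn("你好，今天天气怎么样？"))
print(handle_turn("Hello, how is the weather today?"))
```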
Product Settings
Personalize and manage products
(1) Dialogue mode (free mode, wake-up mode)
(2) Device networking (local mode, AI dialogue)
(3) Role switching (venomous, female warrior, teacher, cute, talkative, funny...)
(4) Voice switching (girl, boy, young man, young woman, custom...)
(5) Language switching (Chinese, English, French, Russian, Arabic, Spanish, dialects, etc.); a sample settings profile is sketched after this list.
A variety of clothing and accessories can be switched freely, providing personalized and diverse choices.
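As a rough illustration, the five options above could be captured in a single settings profile like the one below; the keys and values are assumptions about how the companion app might store them, not the actual schema.

```python
# Illustrative settings profile covering the app options listed above
# (dialogue mode, networking, role, voice, language). Not the real schema.
settings = {
    "dialogue_mode": "wake-up",        # or "free"
    "networking":    "AI dialogue",    # or "local mode"
    "role":          "teacher",        # venomous, female warrior, cute, talkative, funny, ...
    "voice":         "young woman",    # girl, boy, young man, custom, ...
    "language":      "English",        # Chinese, French, Russian, Arabic, Spanish, dialects, ...
}
print(settings)
```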
