Someone in China got very carried away. Syncglasses G2 are smart glasses that, like everything else in this segment, offer notifications and video recording, among other features. According to the manufacturer, they can also “translate” the behavior of pets. The former is cool and desirable; the latter is simply stupid. Downright stupid. To say the least.
The G2 is meant to be a classic representative of the new wave of smart glasses: there is a camera, displays and sound. Nothing we haven’t seen before from the competition. The manufacturer, Chmu Technology, is trying to stand out with the slogan “non-contact health monitoring”, i.e. monitoring health parameters without physical contact with the body.
Ambitious, but not absurd. Analyzing microvibrations of the skin, breathing and eye movements are areas that researchers around the world are actually working on, and it can be done. And while I could justify the lofty health promises, which in turn imply high expectations, the function of explaining pet behavior I cannot. It’s not that it’s unnecessary. It’s that the domain is so delicate, so dependent on nuance, that the feature is downright stupid. Why?
Imagine you have a dog in front of you. A strange dog. The glasses tell you: “the dog is happy, happy to see you!” What do you do? Let’s say you try to greet it and extend your hand. Suddenly, instead of saying hello, the dog decides to bite you. In China that might not cause a scandal and the manufacturer might not pay for it, but in Western countries it would probably end in lawsuits and public ridicule. Someone should spell it out for the Chinese manufacturer: they overdid it, and something shorted out somewhere while the feature list was being drawn up.
What exactly is supposed to be translated?
According to a company representative, the glasses use artificial intelligence to “understand animal behavior.” The algorithm was reportedly trained on large video datasets of dogs and cats. Let’s hope it isn’t the same collection that circulates on TikTok, where, among others, Chihuahua owners deliberately tease their dogs and then everyone concludes these dogs are complete psychopaths. The AI is supposed to analyze the image, the context and, implicitly, the animal’s “intentions.”
The demo footage shows a cat hanging around a litter box. The glasses inform the user that “the litter needs to be replaced.” With all due respect, I would probably have figured that out faster without the glasses. People, it’s a simple logical chain: cat -> litter box -> doesn’t do its business, just hangs around -> there’s probably a problem with the litter. Besides, it wouldn’t be useful to me. My Yoda (that’s what I named my chihuahua), when he doesn’t want anything, keeps to himself. He doesn’t respond to “come” unless you have something in your hand. He only comes when called to the bed, to the fridge, or to the door when the magic word is said: “pee.” Otherwise he cajoles, nuzzles your face and “kisses” you when he wants something.
And by the way, are we talking about explaining behavior, or about trivial scene analysis? Because computer vision systems have been doing the latter for years, without anyone feeling the need to call it a revolutionary feature. This is a marketing oddity, pure PR delulu.
Biology versus marketing
Understanding animal behavior is not a purely visual problem. It’s a combination of neurobiology, years of research by behaviorists, context, the history of a specific individual, and subtle signals that often escape even experienced researchers. Reducing it to the analysis of internet video clips sounds, sorry, absolutely idiotic.
Of course, AI can detect patterns. That is what it was created for. It may recognize that a dog is nervously pacing in circles or that a cat has been staring at the door for a long time. But attributing “understanding” to this is semantics for the most cunning spin doctors. There is no understanding here, only a pattern brought to the surface.
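The distinction made here, pattern detection without understanding, can be illustrated with a deliberately crude sketch. All feature names and thresholds below are invented for illustration; no real product works exactly this way. The point is that such a system maps motion statistics to the nearest labeled pattern and nothing more: the output is a label, not the animal’s intent.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """Hypothetical motion features extracted from video (all invented)."""
    pacing_speed: float    # relative body movement per second (0..1)
    gaze_on_target: float  # fraction of time staring at one point (0..1)
    tail_motion: float     # tail movement intensity (0..1)

def classify(obs: Observation) -> str:
    """Surface the closest matching pattern. Note: this returns a label,
    not the animal's intent; marketing routinely conflates the two."""
    if obs.pacing_speed > 0.7 and obs.gaze_on_target < 0.3:
        return "nervous pacing"
    if obs.gaze_on_target > 0.8:
        return "fixated on target"
    if obs.tail_motion > 0.6:
        return "excited"
    return "no known pattern"

# A dog circling quickly without fixating on anything:
print(classify(Observation(pacing_speed=0.9, gaze_on_target=0.1, tail_motion=0.2)))
# nervous pacing
```

Whether “nervous pacing” means fear, boredom or an upset stomach is exactly the part this kind of system cannot answer, and exactly the part a translator would need.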
Dear manufacturers, there are limits. Right? RIGHT?
AI finds its way into some devices not because it is needed there, but because it looks good on sales slides. Sometimes it helps, as in tools supporting people with visual impairments. Sometimes it is there just so it can be listed on the box. Whether it actually works or changes anything is treated as entirely secondary. And that is the problem.
A translator for dogs and cats sounds impressive, but it solves no problem. It does not save time, does not increase safety, and does not deliver new knowledge. It is just a curiosity designed to generate headlines on the Internet. Even ones like this article.
The glasses were presented at CES, and we learned of their existence from posts on X and other articles. Knowing CES, it is a fairly safe place to test ideas that will probably never reach the market in the announced form. After a few months, only the solutions that really made sense and found buyers remain. Chinese companies receive generous state subsidies, and innovation and entrepreneurship are promoted there (despite a strongly socio-communist political mentality). This sometimes produces ideas that the Western world views with a note of disbelief and a strong dose of embarrassment.
An interesting concept, but unrealistic, even stupid. Maybe someday, but not now. And the AI here solves absolutely no significant problem. After all, whether a dog wants “one meal” or “something more” is obvious when the dog only ever comes to the door. Mine can also scratch furiously with his paws, tickle my feet and terrorize me by frantically clattering across the floor panels. For understanding Yoda, I need these glasses about as much as… a dog needs a kennel. Amen.
