Semantic Wordle
There’s a new version of Wordle called Semantle (not one that I “predicted”) that wants you to find the target word by looking not for a chain of spellings but for a chain of semantics. For example, if you started with the word “child” you might get to the answer as follows:
- Child
- Play
- Game
- Chess
- Square
- Circle
- Donut
- Homer
In short, you’re playing word association, except the associations can be very loose. It’s not like Twenty Questions where, once you head down a track (say “animals”), you’re narrowing the scope until there’s only one thing left. In Semantle, the associations can take a sudden turn in any of a thousand directions at any moment.
Which means it’s basically impossible to win.
It is, however, a good introduction to how machine learning “thinks” about words. Or at least one of the ways. Semantle is based on word2vec, which creates word embeddings derived from an analysis of some large (sometimes very, very large) set of texts. Word embeddings capture the statistical relationships among words based on how close to one another they appear in those texts.
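If you want to see the idea in motion, here’s a minimal sketch using the open-source gensim library (not whatever Semantle actually runs) to train word2vec on a toy corpus. With this little text the numbers are meaningless, but the mechanism is the real one: words that show up in similar contexts end up with similar vectors.

```python
# Minimal sketch: training word2vec on a toy corpus with gensim.
# Real models train on billions of words; this shows only the mechanism.
from gensim.models import Word2Vec

corpus = [
    "the king sat on the throne".split(),
    "the queen sat on the throne".split(),
    "the child wanted to play a game".split(),
    "chess is a game played on a square board".split(),
    "the king moved across the chess board".split(),
]

# vector_size is the number of dimensions each word's vector gets
model = Word2Vec(sentences=corpus, vector_size=25, window=3,
                 min_count=1, epochs=300, seed=1)

# Each word is now a point in a 25-dimensional space...
print(model.wv["king"][:5])                  # first few coordinates
# ...and "semantic closeness" is just closeness of those points
print(model.wv.similarity("king", "queen"))
print(model.wv.similarity("king", "throne"))
```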
In a typical example, word2vec will figure out that “queen” and “king” are semantically close, which may also let it figure out that “king” is to “prince” as “queen” is to “princess.”
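That analogy trick is literally vector arithmetic. A hedged sketch, assuming gensim’s downloadable copy of the Google News word2vec model (a hefty download, roughly 1.6 GB): subtract “prince” from “king”, add “princess”, and see what lands nearby.

```python
# Sketch of analogy arithmetic with pretrained word2vec vectors.
# Assumes gensim's downloader and the Google News model (~1.6 GB).
import gensim.downloader as api

wv = api.load("word2vec-google-news-300")

# "king" - "prince" + "princess" should land near "queen"
print(wv.most_similar(positive=["king", "princess"],
                      negative=["prince"], topn=3))
```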
But there are, of course, many ways that words can be related: different axes of similarity, different dimensions. In word2vec (hence the name), each word is assigned a vector, a long list of numbers that places it in a space with hundreds of dimensions. When playing Semantle, you’re looking for the dimension along which a word and the answer are related. There are many, many of those, some stronger than others. For example, “king” and “queen” are close along one dimension, but so are “king” and “chess”, “king” and “bed size”, and “king” and “Elvis.” Words branch off in many more ways than in Wordle.
For example, in my first game of Semantle, after 45 attempts to find a word that is even a little bit close to the answer, I found that “city” is vaguely related to it. But now I have to guess at the dimension “city” and the target share. The target could be “village”, “busy”, “taxi”, “diverse”, “noisy”, “siege”, or a bazillion other words that tend to appear relatively close to “city” but that are related to it in different ways.
In fact, I did not stumble across the relevant dimension. The answer was “newspaper.”
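Presumably Semantle is scoring each guess by something like cosine similarity, the standard measure of how close two word vectors point. A sketch of what that looks like, again assuming the pretrained Google News vectors: “city” is mildly similar to a whole crowd of words, “newspaper” among them, each for a different reason.

```python
# Cosine similarity: the usual closeness score for word vectors,
# and presumably roughly what Semantle reports for each guess.
import numpy as np
import gensim.downloader as api

wv = api.load("word2vec-google-news-300")

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# "city" is moderately close to words related along very different axes
for word in ["village", "taxi", "noisy", "siege", "newspaper"]:
    print(word, round(cosine(wv["city"], wv[word]), 3))
```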
I think Semantle would be more fun if it started you with a word at some reasonable distance from the answer, rather than making you guess what a reasonable starting word might be. Otherwise, you can spend a long time (45 tries to get “city”) just generating random words. But if we knew a starting word was, say, “foot”, we could start thinking of the dimensions that word sits on: measure, toe, body, shoe, soccer, etc. That might be fun, and it would stretch our minds. A sketch of how that variant might work follows.
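Here is that sketch, with loudly made-up parameters (the 0.25 to 0.45 similarity band and the 20,000-word vocabulary cutoff are my assumptions, not anything Semantle does): pick a random word whose similarity to the target is moderate, neither a gimme nor a cold start.

```python
# Sketch of the proposed variant: hand the player a starting word at a
# "reasonable distance" from the target. The similarity band and the
# vocabulary cutoff are arbitrary assumptions for illustration.
import random
import gensim.downloader as api

wv = api.load("word2vec-google-news-300")

def pick_starter(target, lo=0.25, hi=0.45):
    vocab = wv.index_to_key[:20000]          # 20k most frequent words
    for word in random.sample(vocab, len(vocab)):
        if word != target and lo < wv.similarity(target, word) < hi:
            return word

print(pick_starter("newspaper"))             # e.g., something city-ish
```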
As it is, Semantle is a game the unplayability of which teaches us an important lesson.
And now I shall wait to hear from the many people who are actually able to solve Semantles. I hate you all with a white-hot and completely unreasonable passion.[1]
[1] I’ve heard from people who are solving it. I no longer hate them.
Categories: games, machine learning, tech