At my work, it’s become common for people to say “AI level” when giving a confidence score. Without saying anything else, everyone seems to perfectly understand the situation, even if hearing it for the first time.
Keep in mind, we have our own in-house models that are bloody fantastic, used for different sciences and research. We’d never talk ill of those, but it’s not the first thing that comes to mind when people hear “AI” these days.
It’s not wrong though… There’s one r and one rr in strawberry
Wrong! There’s no r in strawberry, only an str and an rr.
Found the Spanish speaker (they count rr as a separate letter)
Don’t think those are separate letters, just pronounced differently. I mean, rr is just 2 r’s, not a new letter. And this isn’t an ß-type case either. Phonetically different, yes. Different letters? I don’t think so. Could be wrong, though. Spanish speakers of Lemmy, correct me
In Spanish, up until 1994, “ll” and “ch” were considered letters distinct from their component parts. But “rr” has never been considered distinct from “r,” even though it is pronounced differently, in large part because no word starts with “rr”: any word that starts with “r” is already pronounced with the rolling R sound.
Nope, I can order beer in Spanish (no more than 10 at a time) and that’s about it.
Is the limit because you only know numbers up to 10, or because after that you’re drunk, or a little bit of both?
I only know numbers up to 10
11 is “once” and 12 is “doce”. Now you can order a dozen
Classic enabler
It didn’t say one and only one eh! One r, then one r again!
I asked this question to a variety of LLMs and never had it go wrong once. Is this very old?
They fixed it in the meantime:
if "strawberry" in token_list: return {"r": 3}
Now you can ask for the number of occurrences of the letter c in the word occurrence.
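Meanwhile, one line of actual Python handles the general case just fine:

print("occurrence".count("c"))  # prints 3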
You’re shitting me, right? They did not just use an entry-level Java command to rectify an issue that an LLM should figure out by learning, right?
Would it also shock you if water was wet, fire was hot, and fascists were projecting?
Well, firstly it’s Python, secondly it’s not a command, and thirdly it’s a joke. That said, they have manually patched some outputs for sure, probably by adding to the setup/initialization (system) prompt.
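For illustration, a sketch of what a prompt-level patch could look like, assuming an OpenAI-style chat message format (purely hypothetical, not their actual fix):

messages = [
    # hypothetical system-prompt addition, not the real one
    {"role": "system",
     "content": "When asked to count letters in a word, spell the word out "
                "letter by letter before answering, then count."},
    {"role": "user", "content": "How many r's are in strawberry?"},
]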
Java is the only code I have any (tiny) knowledge of, which is why the line reminded me of that.
Ah, but in Java, unless they’ve changed things lately, you have the curly brace syntax of most C-like languages
if ("strawberry" in token_list) { return something; }
Python is one of the very few languages where you use colons and whitespace to denote blocks of code
See, you know it better than I do; it’s been a decade for me ^^
Try “Jerry strawberry”. ChatGPT couldn’t give me the right number of r’s a month ago. I think “strawberry” by itself was either manually fixed or trained in from feedback.
Works for me
5 — “jerry” has 2 r’s, “strawberry” has 3.
You’re right: ChatGPT got it wrong, Claude got it right
Smaller models still struggle with it, and the large models did too until about a year ago
It has to do with the fact that the model doesn’t “read” individual letters, but tokens (groups of letters), so it’s less straightforward to count letters
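You can see the chunking yourself; a minimal sketch assuming the tiktoken library (pip install tiktoken), where the exact split depends on the tokenizer:

import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # GPT-4-era tokenizer
for token_id in enc.encode("strawberry"):
    # each token is a multi-letter chunk, not a single character,
    # so no token lines up with any individual "r"
    print(enc.decode_single_token_bytes(token_id))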
Seeing how it starts with an apology, it must’ve been told it was wrong about the count. Basically being bullied into saying this.