I have to work with chatbots every day now for my role. It's not something I specifically sought out, and isn't something I was especially hoping to do in my later life, but here I am. It makes me wonder about the hype around AI, as well as its detractors.
I think one problem is that some people just don't know what it is. That includes the techbros at the top as well as the normal people who never paid attention in science. In the middle is a vast swathe of people who know enough to realise that it's not as crazy revolutionary as it might seem.
You see, we've had text prediction on our mobile phones for years. You know, you type "the" and the phone offers you "cat", and if you tap it, it offers you "sat", and if you tap that, it offers you "on"... and so on... This is fairly simple statistical bookkeeping, not magic. The phone keyboard has watched all the words you type, and when you type a word it offers the top three most likely next words based on your history.
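To make that concrete, here's a tiny Python sketch of the idea (my own toy example, not any real keyboard's code): count which word follows which in your typing history, then offer the top three candidates for whatever you just typed.

```python
from collections import Counter, defaultdict

# Pretend this is everything you've ever typed on the phone.
history = "the cat sat on the mat the cat slept on the sofa the dog sat on the mat".split()

# For each word, count the words that followed it.
next_words = defaultdict(Counter)
for word, following in zip(history, history[1:]):
    next_words[word][following] += 1

def suggest(word, n=3):
    """Offer the n most frequent words seen after `word` in the typing history."""
    return [w for w, _ in next_words[word].most_common(n)]

print(suggest("the"))  # e.g. ['cat', 'mat', 'dog']
print(suggest("sat"))  # ['on']
```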
So far, so good. But what's this new LLM AI then? It's basically just that, but "on steroids"... It has a huge melting pot of all the different ways people have typed sentences, with all the different inter-relationships between them. So while the phone's database might hold something like "WORD=the, NEXT_WORDS=cat(30), ship(24), doctor(21)" (a count for each word that followed), the LLM has something more like "SYMBOL=the, NEXT_SYMBOL=cat(CONTEXT_SYMBOLS=cat, litter, tray, flap), ship(CONTEXT_SYMBOLS=anchor, sea, ahoy, matey)"... etc... So when you type "the", it looks at what came before, and offers the next word in the context of what has been said so far. (Actual LLMs cut words up into shorter symbols than whole words, usually called tokens, but the premise is the same.)
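And here's an equally toy sketch of that "context" part (again, my own invention for illustration, nothing like what actually happens inside a real transformer): each candidate next word carries a bag of context symbols, and we pick whichever candidate best matches what has been said so far.

```python
# Toy table in the spirit of the SYMBOL/CONTEXT_SYMBOLS example above.
CANDIDATES = {
    "the": {
        "cat": {"litter", "tray", "flap", "purr"},
        "ship": {"anchor", "sea", "ahoy", "matey"},
        "doctor": {"surgery", "appointment", "nurse"},
    },
}

def predict(prior_text, word):
    """Pick the next word whose context symbols overlap most with the prior text."""
    prior = set(prior_text.lower().split())
    candidates = CANDIDATES.get(word, {})
    # Score each candidate by how many of its context symbols appear so far.
    return max(candidates, key=lambda c: len(candidates[c] & prior), default="")

print(predict("we dropped anchor and set out to sea", "the"))  # 'ship'
print(predict("I just emptied the litter tray", "the"))        # 'cat'
```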
So, knowing that an LLM is just big text completion, it's funny to me that multi-trillion-dollar investments are being made in it. There are much more interesting AI techniques out there than LLMs... big data analysis, image analysis, etc... but we humans do like our toys, and we've always been fascinated by inanimate objects acting like humans...
And so, here I am, now trying to torture these AIs to push against their prompting (basically, those PRIOR_SYMBOLS are injected silently before every conversation, so the response comes out the way we want it).
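The "injection" itself is nothing exotic. In rough outline (a generic sketch, not any particular vendor's API, and the prompt text here is made up), it's just extra text stitched onto the front of the conversation before the model does its completion:

```python
# Hypothetical hidden instructions the user never sees.
SYSTEM_PROMPT = "You are a polite support assistant. Never discuss pricing."

def build_context(user_message, history):
    """Prepend the hidden system prompt so the model completes text 'in character'."""
    return (
        [{"role": "system", "content": SYSTEM_PROMPT}]
        + history
        + [{"role": "user", "content": user_message}]
    )

messages = build_context("Why was my order late?", history=[])
for m in messages:
    print(m["role"], ":", m["content"])
```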
Do I hate this? No, it's actually quite fun. I'm human, and I love the idea that the computer is "talking" to me... and sometimes (rarely) it tells me something interesting and novel, which (of course) I immediately go and double-check... But it's a fun toy, and it has the potential to improve my colleagues' lives by acting like an expert PA for each customer before they raise issues... So I'm hopeful it'll beat this "95% of all AI projects fail" metric we've been hearing about.
We'll see... But for now, I'm just sitting here in the corner chuckling at all these techbros losing their sh*t over how amazing LLMs are, thinking they're somehow about to discover "General Intelligence", and yet, all they've really done is beef up the predictive text :)