Not a partridge in a pear tree – a kookaburra in an agave (on the Bondi to Tamarama coastal path, October
2025).
Christmas 2025
Happy Christmas (late again, I’m afraid), and Happy New Year for 2026.
This is the next in a series of end-of-year web pages, started after I was unable to continue my tradition
of sending Xmas cards.
It’s obviously more impersonal than an individually signed card, but at least it contains more information
than a brief greeting.
I mentioned last year that I had decided to stop showing photos (or even names) of
other people on these pages to protect their privacy, and this decision limits what I am able to include.
I also said that I was planning to create a password-protected version of these pages to get around the privacy
issues, but that’s still a work in progress.
UK and Ireland
I’ve explained before that I like to schedule my trips to England in August (to coincide with their school
holidays), and that I also try to meet up with my friends from my time in New York at a pre-arranged
location.
In 2025 the location was Ireland.
Artificial Intelligence
In the past year, AI has gone from being an obscure niche interest to a constant topic of conversation.
As it happens, it’s a topic that I feel better placed than most to comment on, having studied neural
networks for a project I was working on a few years ago, so I thought I’d offer a few observations.
First, we need to distinguish between earlier forms of AI and Generative AI (and it infuriates me that much of the
discussion of AI fails to make this distinction).
A good example of classic AI is the use of Neural Networks to recognise melanoma (an aggressive skin cancer) in an
image of a suspicious skin discolouration.
To handle this task, an AI can be “trained” with a large set of images, each one tagged to indicate
whether the lesion turned out to be a melanoma or not.
Then, when given a new image to diagnose, it can pick out features of the image that even a skilled human
specialist might not notice, and use those to make a determination that is more reliable than any human decision.
AI models of this type are already in use, and I would trust them with my life.
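For the technically curious, here is a toy sketch of the kind of supervised training loop involved. Everything in it is hypothetical: it uses the PyTorch library, random tensors stand in for a real set of labelled skin images, and the network is far smaller than anything used in practice.

```python
# Toy sketch only: random tensors stand in for real labelled images,
# and the network is deliberately tiny.
import torch
import torch.nn as nn

# Hypothetical training data: a batch of RGB "images", each tagged
# 0 (benign) or 1 (melanoma), as described above.
images = torch.randn(32, 3, 224, 224)
labels = torch.randint(0, 2, (32,))

# A minimal convolutional network with two output classes.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 2),
)

optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# "Training": repeatedly nudge the weights so the network's predictions
# agree with the tags on the training images.
for epoch in range(5):
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimiser.step()

# Diagnosing a new image: the class with the higher score wins.
with torch.no_grad():
    new_image = torch.randn(1, 3, 224, 224)
    verdict = model(new_image).argmax(dim=1)  # 0 = benign, 1 = melanoma
```

The point is the shape of the process: show the network labelled examples, adjust its weights until its predictions match the labels, then ask it to classify an image it has never seen.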
Generative AI, on the other hand, takes this sophisticated pattern recognition and applies it to information that
it has just made up.
For each word or phrase (or each pixel or fragment of an image), it tries a number of possibilities, scoring
each with pattern recognition similar to that described above, and selects the one that seems to fit best with
what it has generated so far.
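As a rough illustration, here is a toy sketch in Python (not any real model) of that generative step: a scoring function, standing in here for the neural network, rates how well each candidate word fits the text so far, and the best-fitting one is appended. Notice that nothing in the loop asks whether the result is true.

```python
# Toy sketch only: the scoring function is a random stand-in for the
# neural network's judgement of "fits what we already have".
import random

def score(context: str, candidate: str) -> float:
    """Hypothetical stand-in for the model's fit-with-context score."""
    return random.random()

def generate(context: str, vocabulary: list[str], n_words: int) -> str:
    for _ in range(n_words):
        # Try each possible next word and keep the best-fitting one.
        # Nothing here checks whether the output is actually true.
        best = max(vocabulary, key=lambda w: score(context, w))
        context += " " + best
    return context

print(generate("The citation is:", ["Smith", "(2021)", "Journal", "of"], 4))
```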
For example, when asked to create a scientific paper, it could draw on its repository of thousands of such papers
and produce a form of words that plausibly resembles those other papers; but if the paper describes a new
phenomenon that is not in its repository, it will have nothing to check against.
There have already been examples where Generative AI has made up citations that are totally fictitious –
they are grammatically correct and they appear to follow the correct form, but they have no basis in reality.
This type of output is often described as hallucination, but the more accurate word is bullshitting (and I
am grateful to an article in a special edition of Scientific American devoted to AI for this observation).
Hallucination sounds more forgivable than bullshitting – it suggests that the AI was itself the victim of
some external adverse influence.
But a bullshitter is someone who is careless as to whether the information they are giving out is correct or not,
and no-one respects a bullshitter.
Then there’s the power consumption.
Generative AI makes use of Large Language Models – complex sets of data built up by ingesting vast amounts
of text (much of it copyrighted, and therefore consumed illegally, but that’s another story).
Training a model takes far more power than querying it, and the training of these models consumes staggering
amounts of electricity, powering arrays of thousands of the most expensive computer chips currently in use.
All of this comes at an eye-watering cost, and there is currently no known business model that explains how that
cost can be recovered.
Generative AI is a magnificent intellectual achievement, but if its output is not to be trusted, then it’s
of little value other than for entertainment, and it’s difficult to see how that justifies the billions
spent on it.
In case you were wondering (you probably weren’t), no part of my site is generated by AI.