Has anybody played with a chatbot to write code?
Some people have. The verdict seems to range from 'perfect' to 'utterly useless', so it's not easy to say how useful they are; it will likely depend on the chatbot or AI used and what one asks it to do.
The pertinent question is: can ChatGPT or another AI write valid code for a PICAXE which does the job a user wants it to do?
It is possible but, from what I have seen, while they give the appearance of doing a fine job, they are usually lacking in some important respect. Take, for example, "pwmout ledPin, pwmValue": it looks great on a first reading, but it's not a valid command, will give a syntax error, and leaves the user having to figure out what it should have been.
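For comparison, the real PWMOUT command takes a period and a duty cycle as well as a pin, per the PICAXE manual. A sketch of what a corrected line might look like (the pin choice here is an assumption, not from the AI output):

```basic
; pwmout pin, period, duty  - three parameters, not two
pwmout C.2, 255, 512   ; ~3.9 kHz at 4 MHz, roughly 50% duty
```

The two-parameter form the AI produced simply doesn't exist, which is why the compiler rejects it.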
One should also compare what's generated to what someone experienced with PICAXE coding would produce.
The problem with ChatGPT and similar is that they aren't 'thinking' and aren't applying knowledge; they are just mashing together things they have seen others write, using statistics and 'proprietary magic'. And they aren't averse to simply making things up, 'hallucinating' as they call it. I might call it something else if not on a family friendly forum.
If one analyses the output in Post #2, it's no more impressive than what anyone with some minimal programming experience and the PICAXE manuals could produce. Most of it is verbiage, effectively cut-and-paste explanations; verbose expansions of 'C.1 analogue input for a pot', 'C.2 has a LED connected', 'read an ADC', 'output as PWM'. In some respects it's just padding.
It is perhaps good enough for writing a How To for a simple task, though for all it outputs, the code doesn't actually compile and, as noted, doesn't necessarily work. I would also say it's not a great code example. It would be better IMO to have used READADC10 for a 10-bit pot value, used PWMOUT to initialise the PWM, then updated it with PWMDUTY. I would personally have used a DO loop rather than GOTO.
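As a sketch of what I mean, assuming an 08M2-class chip with the pot on C.1 and the LED on the PWM-capable pin C.2, as in the original task:

```basic
; Read a pot on C.1 and set LED brightness on C.2 via hardware PWM.
; Assumes an 08M2 at 4 MHz - adapt pin names to your own chip.

symbol POT = C.1        ; ADC input from the potentiometer
symbol LED = C.2        ; PWM-capable output pin
symbol duty = w0        ; word variable to hold the 10-bit reading

pwmout LED, 255, 0      ; initialise PWM: period 255, duty 0 to start

do
    readadc10 POT, duty ; 10-bit read, 0-1023
    pwmduty LED, duty   ; with period 255 the duty range is 0-1024,
                        ; so the 10-bit value maps almost directly
    pause 20            ; brief delay between updates
loop
```

The point being that READADC10 plus PWMDUTY uses the full pot resolution, and the DO...LOOP avoids the GOTO.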
What worries me is that if it can't get it right when fewer than a dozen lines of code are required, what chance is there of it getting things right when far more complicated code is needed? And when it gets simple stuff wrong, how can we trust it to get more complicated things right?
I suspect those hoping AI will save them from the effort of coding will mostly find that it moves effort from writing code to figuring out what code does and why it doesn't work. And that may end up being more work than it would otherwise have been.