Odd problem - 14M2 works, 20M2 doesn't

Jeremy Harris

Senior Member
I've spent the last few hours trying to bottom out an odd problem. I have an OpenLog connected to a 20M2, so that C.3 transmits serial data at 9600 baud to the OpenLog and C.4 receives serial data from the OpenLog. The received data is in ASCII and consists of five ASCII characters read from a file on the OpenLog.

The problem I have is that I used a snippet of code that I'd used on a 14M2 and just changed the pin numbers to use it on the 20M2. This works, inasmuch as commands can be sent to the OpenLog and data can be received, but for some reason the ASCII receive capability seems not to work on the 20M2 (it works perfectly on the 14M2).

First, here's the working 14M2 test code to read five characters from a file on the card in the OpenLog and spit them out to the terminal:

Code:
;Test code for OpenLog


#Picaxe 14M2
;OpenLog TX = c.1
;OpenLog RX = c.0
;tested and works perfectly on 14M2

symbol CTRLZ = 26										;ctrl Z

setfreq m8											;set 8MHz clock to allow 9600 baud comms

init:					
	
	pause 200
	
	high c.0										;this is needed to ensure serial command works first time on OpenLog port

		
	serout c.0,T9600_8,(CTRLZ,CTRLZ,CTRLZ)					;force command mode on OpenLog (should default to this mode)			
	
	serin c.1,T9600_8,(">")								;wait for cursor as indication that OpenLog is ready to receive commands
	
main:
		
	serout c.0,T9600_8,("read DATETIME.TXT 0",CR)				;read datetime.txt file (should have 5 ASCII characters)
		
	serin c.1,T9600_8,#b0,#b1,#b2,#b3,#b4		

	sertxd (#b0,CR,LF,#b1,CR,LF,#b2,CR,LF,#b3,CR,LF,#b4,CR,LF,CR,LF)
			
	pause 2000
	
	goto main
	
END
Next, here's the same code for a 20M2, which stalls at the serin command (the only change is that the pin designations are different):

Code:
;Test code for OpenLog


#Picaxe 20M2
;OpenLog TX = c.4
;OpenLog RX = c.3
;Doesn't work on 20M2

symbol CTRLZ = 26										;ctrl Z

setfreq m8											;set 8MHz clock to allow 9600 baud comms

init:					
	
	pause 200
	
	high c.3										;this is needed to ensure serial command works first time on OpenLog port

		
	serout c.3,T9600_8,(CTRLZ,CTRLZ,CTRLZ)					;force command mode on OpenLog (should default to this mode)			
	
	serin c.4,T9600_8,(">")								;wait for cursor as indication that OpenLog is ready to receive commands
	
main:
		
	serout c.3,T9600_8,("read DATETIME.TXT 0",CR)				;read datetime.txt file (should have 5 ASCII characters)
		
	serin c.4,T9600_8,#b0,#b1,#b2,#b3,#b4		

	sertxd (#b0,CR,LF,#b1,CR,LF,#b2,CR,LF,#b3,CR,LF,#b4,CR,LF,CR,LF)
			
	pause 2000
	
	goto main
	
END
I've reluctantly concluded that there may be a problem related to the way the 20M2 handles serial input. I've ascertained that it will receive raw binary OK (if I remove the # qualifiers it receives a CR, LF, and then the first two bytes of data as raw binary, followed by another CR). The serin command just seems to lock up if I try to use the # qualifier, though.

Does anyone know why the # qualifier works OK for serin on the 14M2 but doesn't seem to on the 20M2?

Not sure if this is the problem or not, but having been chasing my tail for a fair while over this I'd appreciate another view!
 

marks

Senior Member
Hi Jeremy Harris,
I've found that on the 18M2 you usually need to run at 16MHz at least to receive a sentence at T9600; the 14M2 may be just that bit quicker, lol.
A correct pull-up also improves reliability.
 

bpowell

Senior Member
I've ascertained that it will receive raw binary OK (if I remove the # qualifiers it receives a CR, LF, and then the first two bytes of data as raw binary, followed by another CR). The serin command just seems to lock up if I try to use the # qualifier, though.

Does anyone know why the # qualifier works OK for serin on the 14M2 but doesn't seem to on the 20M2?
Are you receiving the data you expect in binary mode? Should there be a CR, LF, Data0, Data1, CR? If that's all correct, then you're receiving the serial just fine...maybe you're bumping into a timing issue because you're converting it to ASCII on the fly? Why not up your frequency to 16MHz? If not for the whole program, then at least for this serial receive portion? See if that works.

Why it works on the 14M2 and not the 20M2? Can't say...without knowing the chips inside and out, and the firmware associated with them, I can only speculate that the firmware might have "more to do" on the 20M2, and you're running into the limit of how fast it can process the input, convert to ASCII, etc...just a guess though.
 

Goeytex

Senior Member
The 20M2 cannot reliably receive serial data at a baud rate above ~9615 baud using software serin set to 9600_8. The actual range of serin on a 20M2 when set to 9600_8 is from 8968 to 9615 baud. It is likely that the data being sent is right at or slightly above 9600 baud. 9615 baud is the absolute maximum that can be received at 9600_8; it really wants the data at 9215, which is the center.

On the other hand, the 14M2 with serin set to 9600_8 can reliably receive data from between 9280 and 9925 baud with the center being ~9602 baud. This is right on spec.

Setting the 20M2 to 9600_16 should solve the problem since the serin range at 9600_16 is from 9132 to 10526 baud with the nominal center being 9829. This is very accurate and is the only setting I will consider using when I need to receive data using serin at 9600 baud on a 20M2.

If the problem were related to processing overhead it would have shown up on the 14M2 set to 9600_8. It would be a mistake to ignore, gloss over or obfuscate this serin accuracy problem of the 20M2 when set to 9600_8.
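
For anyone wanting to try that, the OP's 20M2 loop reworked for 9600_16 would look something like this. This is a sketch only (I have not run this exact variant); note that pause values must be doubled to keep the same real-time delays at 16MHz, and sertxd then runs at 19200 baud to the terminal.

Code:
#Picaxe 20M2
;OpenLog TX = c.4
;OpenLog RX = c.3

symbol CTRLZ = 26					;ctrl Z

setfreq m16						;16MHz clock so serin can use the more accurate 9600_16 timing

init:
	pause 400					;pause periods halve at 16MHz, so values are doubled
	high c.3					;ensure serial command works first time on OpenLog port
	serout c.3,T9600_16,(CTRLZ,CTRLZ,CTRLZ)		;force command mode on OpenLog
	serin c.4,T9600_16,(">")			;wait for the OpenLog prompt

main:
	serout c.3,T9600_16,("read DATETIME.TXT 0",CR)	;read datetime.txt file
	serin c.4,T9600_16,#b0,#b1,#b2,#b3,#b4
	sertxd (#b0,CR,LF,#b1,CR,LF,#b2,CR,LF,#b3,CR,LF,#b4,CR,LF,CR,LF)
	pause 4000
	goto main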
 

bpowell

Senior Member
The 20M2 cannot reliably receive serial data at a baud rate above ~9615 baud using software serin set to 9600_8.
Hi Goeytex,

I thought about this too given the other thread on the forum...however, the 20M2 *IS* reliably receiving the data at 9600_8 (it appears) provided the data is received and stored as binary and not as ASCII...the OP states if he does "serin T9600_8 b0,b1,b2", etc., it works fine...but when he tries to do "serin T9600_8 #b0,#b1,#b2" it doesn't work.

Frankly, I'm not sure why he's trying to receive data in anything but binary...why not receive it as binary, and TRANSMIT it as ASCII...that should take care of the issue entirely...right?

But, back to my point: OP was receiving data just fine until he added the overhead of converting to # during the receive.

I also agree it's odd that the 14M2 can do it while the 20M2 cannot...I can only guess (if it's overhead related) that the 20M2 firmware has a few more things to "check" in its loops between tokenized BASIC execution that the 14M2 does not.
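
Something like this is what I mean by receive-binary, transmit-ASCII, using the OP's pins (just a sketch; it ignores the CR/LF framing in the OpenLog stream):

Code:
#Picaxe 20M2
setfreq m8					;8MHz clock for 9600 baud comms

main:
	serin c.4,T9600_8,b0,b1,b2,b3,b4	;receive raw bytes - no # conversion overhead
	sertxd (b0,b1,b2,b3,b4,CR,LF)		;the bytes are already ASCII characters, so send them on unchanged
	goto main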
 

Jeremy Harris

Senior Member
Thanks folks, I think I'm getting closer to understanding this. First off, it doesn't seem to be frequency dependent, as it still fails at 32MHz (and 9600 baud). However, the mystery deepens!

I've just hooked up a test rig on the AXE091 (brilliant bit of kit, BTW) where I've programmed an 08M2 to send five ASCII characters, with a preceding CR/LF and CR/LFs between characters, every 2 seconds. This simulates the OpenLog transmitted data.

Using this rig the 14M2 fails too, but still receives the raw data OK. If I remove the # from in front of the receive byte locations, data is received OK. Using 32MHz doesn't fix it.

There's definitely something odd about the way that the "#byte" works, especially as it works reliably on data from the OpenLog to a 14M2, but seems to fail on data sent at the same rate from an 08M2.

I'm guessing that there's more to this than just a timing problem, especially as the data logger I built a year or so ago using a 14M2 has been working very reliably, 24 hours a day, for more than a year now. If the timing were marginal I'd expect it to glitch sometimes, and it hasn't, ever. The second one I built also works OK, although it's not been used continuously like the original.

Answering the questions:

Yes, data is received reliably as binary, no problem at all. I can get all the bytes out of the OpenLog file, including CRs and LFs, like this.

I can't change the data format. The OpenLog reads the text file and sends it out at 9600 baud as is, with standard ANSI style formatting. I don't have the option of telling it to change the data format, and the file in question is a text file created on a PC and written to the µSD card.

My only option seems to be to increase the number of received bytes (that uses a stack more temporary variables, though) and then parse the raw data back into real numbers.



Edited to add:

I've just tried using a 14M2 as the serial data stream generator and another 14M2 as the receiver. Same issue: the # qualifier doesn't work, but raw data can be received with no problem. I'm going to work around this using some other technique, although it's a nuisance as I've already made up the PCB for the project and don't have much programme space left for a stack of bodge code to parse data back to real numbers.
 
Last edited:

bpowell

Senior Member
There's definitely something odd about the way that the "#byte" works, especially as it works reliably on data from the OpenLog to a 14M2, but seems to fail on data sent at the same rate from an 08M2.

...


I can't change the data format. The OpenLog reads the text file and sends it out at 9600 baud as is, with standard ANSI style formatting. I don't have the option of telling it to change the data format, and the file in question is a text file created on a PC and written to the µSD card.
Okay, I don't want to hijack this thread...but I need help understanding the difference between "b0" and "#b0" when it comes to input...is this even valid?

In OUTPUT, the difference is: if b0 = 65...then "serout b0" will result in the PICAXE sending 1 BYTE with the value of 65...if we do "serout #b0" the PICAXE will send 2 BYTES, the ASCII characters "6" and "5".

But, if the data logger is sending characters over, it's likely sending them in binary bytes...if the logger is sending "A" then it's sending the binary value 65; therefore, you should just receive "b0" and then, if you want to write the "A" to your terminal, you should SEROUT b0...otherwise, if you want to write the VALUE, you should SEROUT #b0.

You said the data logger is sending in standard ANSI...this is just ASCII (with more characters) so you're receiving a BYTE of data per character.

That's how I understand it at least.

EDIT:

I may have answered my own question...

If the data logger wants to send "5" (as in, 5 events happened) it's sending binary 53, which is the ASCII character for 5, right?
Therefore, you want to SERIN #b0 so you get the value of "5" stored in b0...rather than SERIN b0, which would store the value "53" in b0...is that right?
 
Last edited:

bpowell

Senior Member
I've just tried using a 14M2 as the serial data stream generator and another 14M2 as the receiver. Same issue, the # qualifier doesn't work, but raw data can be received with no problem. I'm going to work around this using some other technique, although it's a nuisance as I've already made up the PCB for the project and don't have much programme space left for a stack of bodge code to parse data back to real numbers.
Well, if it's just a single number (0 - 9) then you can just say, "If b0 >= 48 AND b0 <= 57 then b0 = b0 - 48"

Still, this is very strange...so the data logger sends a single byte for a number from 0 - 9...then, if the value is 10, the data logger sends 2 bytes (1 for the "1" and 1 for the "0")? How do you account for this in code? This seems odd to me.
 

Jeremy Harris

Senior Member
Okay, I don't want to hijack this thread...but I need help understanding the difference between "b0" and "#b0" when it comes to input...is this even valid?
Seems to be valid and to work for SERIN. For example, the text data stream (as decimal byte representation): 13 10 49 56 13 10 48 53 13 10 49 51 13 10 is returned as three bytes, "18", "5" and "13" (for the data) when the # qualifier is used. I've used this technique in the past to read the date and time from a µSD card and use it to set the RTC (to within about a minute) and also to set operating parameters (such as the sample period in a data logger). It's worked well and is documented on my data logger thread from a year or so ago.

In OUTPUT, the difference is: if b0 = 65...then "serout b0" will result in the PICAXE sending 1 BYTE with the value of 65...if we do "serout #b0" the PICAXE will send 2 BYTES, the ASCII characters "6" and "5".

But, if the data logger is sending characters over, it's likely sending them in binary bytes...if the logger is sending "A" then it's sending the binary value 65; therefore, you should just receive "b0" and then, if you want to write the "A" to your terminal, you should SEROUT b0...otherwise, if you want to write the VALUE, you should SEROUT #b0.
Yes, but I have no control over the data format the OpenLog transmits. I'm stuck with the fact that the serial data is just a stream of binary with each byte representing either a CR, an LF or a single character. There are lots of CRs and LFs, so this uses up a LOT of variables, only for the data to be discarded when I parse it back. The # qualifier has done an excellent job in the past of allowing me to receive the date and time (in the form D/M/Y H:M) into just five byte variables directly.

You said the data logger is sending in standard ANSI...this is just ASCII (with more characters) so you're receiving a BYTE of data per character.

That's how I understand it at least.
Yes, that's right. The file format is that the transmitted data starts with a CR and LF as a SOF marker, uses CR and LF between character groups (i.e. a 2-figure date, like "18", makes up a group) and the file ends with a CR and LF.
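
If I do end up parsing raw bytes, the workaround would look something like this (a sketch only, untested, and it assumes every group is zero-padded to two digits exactly as in the 14-byte stream above):

Code:
#Picaxe 20M2
setfreq m8

symbol dayval = b20
symbol mthval = b21
symbol yrval  = b22

main:
	;grab the whole 14-byte burst in one serin so no bytes are lost
	;between successive serin commands
	serin c.4,T9600_8,b1,b2,b3,b4,b5,b6,b7,b8,b9,b10,b11,b12,b13,b14
	;Picaxe maths is strictly left to right, so each line below
	;evaluates as ((digit1 - 48) * 10) + digit2 - 48
	dayval = b3  - 48 * 10 + b4  - 48
	mthval = b7  - 48 * 10 + b8  - 48
	yrval  = b11 - 48 * 10 + b12 - 48
	sertxd (#dayval,"/",#mthval,"/",#yrval,CR,LF)
	goto main
A single-digit group would shift all the fixed byte positions, which is exactly what makes the # qualifier attractive when it works.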
 

Jeremy Harris

Senior Member
Well, if it's just a single number (0 - 9) then you can just say, "If b0 >= 48 AND b0 <= 57 then b0 = b0 - 48"

Still, this is very strange...so the data logger sends a single byte for a number from 0 - 9...then, if the value is 10, the data logger sends 2 bytes (1 for the "1" and 1 for the "0")? How do you account for this in code? This seems odd to me.
The OpenLog is just being used as a file storage system in this application, not as a logger, per se. It sends data as it is written in the text file, so uses CR and LF to separate character groups. For example, the date time format I've been using in the past is:

18
5
13
14
43

Which represents 18th May 2013, 14:43. The # qualifier has correctly decoded this on the 14M2 to five byte values, which I can then use to set the date and time as the card is read. I also use the same technique for setting operating parameters; it's nice and easy to just write a text file to a µSD card on the PC, stuff it in the Picaxe based logger and have it set things up.
 

bpowell

Senior Member
Seems to be valid and to work for SERIN. For example, the text data stream (as decimal byte representation): 13 10 49 56 13 10 48 53 13 10 49 51 13 10 is returned as three bytes, "18", "5" and "13" (for the data) when the # qualifier is used. I've used this technique in the past
I'd be curious to know how the PICAXE knows what it's expected to receive...for instance, binary 49 = "1" and binary 56 = "8"...so if you say "serin #b0" and expect an "18" in the value, how does the PICAXE know to take the two bytes being thrown at it and concatenate them into one value? Why doesn't the PICAXE just stop at the first byte (49) and say the value is "1"? What if you were throwing 3 bytes at it ("49, 49, 56") and expected "118" as the value?

I think if we had a better understanding of what the PICAXE (the 20M2 specifically) is doing "under the hood" to interpret these bytes using the #BYTE qualifier, we might understand what's going on.

What you're doing is legit, and in the manuals...but I don't understand how it works well enough to help troubleshoot...sorry I can't help more!
 

John West

Senior Member
This sounds like a job for Super Technical! However, it's Saturday in many parts of the world, and he may have better things to do. :)
 

Jeremy Harris

Senior Member
This sounds like a job for Super Technical! However, it's Saturday in many parts of the world, and he may have better things to do. :)
I think you're right!

There's no panic now, though, as I've managed to sort out a workaround for this particular project. On the environmental data logger project I used files on the µSD card to both set the date and time on the RTC and to set the sample period for logging. The current project only needs a way to set the date and time on the RTC.

So I've dug out an old project that I made a while ago that is just an MSF clock. It uses an 08M to decode the MSF date/time signal from a small receiver and display it on an LCD. I only use it occasionally as a way to get an accurate time when setting clocks around the house. Anyway, when I built it I had the foresight to include a spare 3.5mm jack connected to a serial output. This squirts out the date/time data once a second, as 7 bytes of raw data.

By hooking up a spare pin (with a weak pull-up programmed) on the 20M2 in the current project to a 3.5mm jack socket, and then detecting when the MSF receiver cable is plugged in by looking for that line to go low (the receiver uses N2400), I can divert off to a subroutine to read the data stream and reset the RTC in the new project.

This means I don't now need to read from the µSD card at all, and can just write data to it.

I'll post details of the finished project later. It's an air quality monitor and logger, that measures CO2, relative humidity and temperature, displays it on an LCD and logs data every 6 minutes to a µSD card. The CO2 measurement was the tricky bit, but luckily I managed to buy some surplus, very high spec, non-dispersive IR sensors cheaply that are nice and easy to use, as they have a serial interface.
 

hippy

Technical Support
Staff member
I think if we had a better understanding of what the PICAXE (the 20M2 specifically) is doing "under the hood" to interpret these bytes using the #BYTE qualifier, we might understand what's going on.
Quite simply, when SERIN #var is used, the firmware starts taking received bytes as ASCII characters: it waits until it gets the first digit character ("0" to "9"), then accumulates further digit characters until it receives a non-digit character. Once that's done it stores the accumulated decimal value in the variable, moves on to the next token in the SERIN command and, when there are no more tokens, continues with the next command.

The SERIN cannot guess how many character digits will be arriving nor when they will arrive so it has to wait until a non-digit character is received to know there are no more.

The common problem is when people send from VB or some other programming language and send, say, the number 123 as "1", "2" then "3" with nothing after it. The SERIN will be left waiting for another digit or a non-digit and cannot proceed at that point.
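
To make that concrete, here is a minimal two-chip sketch (the pin choices are only illustrative): the sender must follow the digits with a non-digit, a CR here, or the receiver's SERIN #b1 never completes.

Code:
;--- sender (08M2, output on c.1) ---
#Picaxe 08M2
setfreq m8

main:
	b0 = b0 + 1
	serout c.1,T9600_8,(#b0,CR)		;the CR is the non-digit that ends the number
	pause 1000
	goto main

;--- receiver (20M2, input on c.4) ---
#Picaxe 20M2
setfreq m8

main:
	serin c.4,T9600_8,#b1			;accumulates digits until a non-digit arrives
	sertxd ("got ",#b1,CR,LF)
	goto main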
 

Jeremy Harris

Senior Member
Quite simply, when SERIN #var is used, the firmware starts taking received bytes as ASCII characters: it waits until it gets the first digit character ("0" to "9"), then accumulates further digit characters until it receives a non-digit character. Once that's done it stores the accumulated decimal value in the variable, moves on to the next token in the SERIN command and, when there are no more tokens, continues with the next command.

The SERIN cannot guess how many character digits will be arriving nor when they will arrive so it has to wait until a non-digit character is received to know there are no more.

The common problem is when people send from VB or some other programming language and send, say, the number 123 as "1", "2" then "3" with nothing after it. The SERIN will be left waiting for another digit or a non-digit and cannot proceed at that point.
Good explanation.

In my case, a typical sequence might be a 1, then an 8, then CR and LF, so the # should return 18, which is exactly what it does with the 14M2. Pity it doesn't do the same with the 20M2, but as the testing I've done today seems to show, there seems to be a fair degree of unreliability with this feature under certain circumstances.
 

bpowell

Senior Member
Quite simply, when SERIN #var is used, the firmware starts taking received bytes as ASCII characters: it waits until it gets the first digit character ("0" to "9"), then accumulates further digit characters until it receives a non-digit character. Once that's done it stores the accumulated decimal value in the variable, moves on to the next token in the SERIN command and, when there are no more tokens, continues with the next command.

The SERIN cannot guess how many character digits will be arriving nor when they will arrive so it has to wait until a non-digit character is received to know there are no more.

The common problem is when people send from VB or some other programming language and send, say, the number 123 as "1", "2" then "3" with nothing after it. The SERIN will be left waiting for another digit or a non-digit and cannot proceed at that point.
Awesome explanation Hippy; thanks! It sounds like there is a bit of "decision making" in the #var routine...nice to know how it works though!
 

Goeytex

Senior Member
Given hippy's explanation of how serin #variable works, I wrote the following test code. The first program uses a Picaxe 08M2 to send the data using hserout, which is very accurate with regard to baud rate and stop bits. Hserout on an 08M2 does not add extra space between bytes, so the output is a true 8N1, which is likely to match most peripheral devices.

When the data is sent at an actual 9600 baud, the 20M2 serin code fails. When the data is sent at 9250 baud, everything works as expected. There is nothing wrong with how the 20M2 handles serin #data; there is clearly something wrong with the baud rate expected by serin 9600_8 on the 20M2.

For the naysayers and those who refuse to consider that baud rate could be the problem here, I say run the code and test it yourself.

The good news is that it seems to work fine at 9600_16. Not because of decreased processor overhead at 16MHz, but rather because the serin baud rate accuracy is excellent at 9600_16 and very poor at 9600_8.

For 20M2 Under Test
Code:
#Picaxe 20M2
#No_Data
#com 1
Setfreq M8
#terminal 9600
Pause 1000

symbol sp = 32

MAIN:

sertxd ("Test Starting",cr,lf)
sertxd ("======================",cr,lf,cr,lf)

do 
  	serin c.2,N9600_8,(">")
  	sertxd ("Qualifier Received",cr,lf)
  
  	serin c.2,N9600_8,#b0,#b1,#b2,#b3,#b4    ' Receive five values
  	sertxd (#b0,sp,#b1,sp,#b2,sp,#b3,sp,#b4,cr,lf)   
  
  	pause 100
  	sertxd (cr,lf) 
loop
Sending Device (08M2)
Code:
'======================================================
'  USING 08M2 HSEROUT TO SIMULATE TRANSMITTING DEVICE
'=======================================================

#Picaxe 08M2
#No_Data
#com 2
#terminal off
Setfreq M16
Pause 100

symbol sp = 32 'Space

b0 = 123
b1 = 245
b2 = 77
b3 = 88
b4 = 99

MAIN:

'Select one
'===============================================================
;HSERSETUP B9600_8,%10     'BAUD RATE = 9600 8N1 (DOES NOT WORK)

HSERSETUP 432,%10          'BAUD RATE = 9250 8N1 (THIS WORKS)
'===============================================================
pause 100
 
	do
		hserout 0,(">")
		pause 200
		hserout 0,(#b0,sp,#b1,sp,#b2,sp,#b3,sp,#b4)
		pause 4000
  
	loop
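
As an aside, if HSERSETUP simply loads its first argument into the PIC's 16-bit baud rate generator in high-speed mode (my assumption here, so check the manual), the 432 above works out as

$$\text{baud} = \frac{F_{\text{osc}}}{4\,(N+1)} = \frac{16\,000\,000}{4 \times (432+1)} \approx 9238$$

which is in the region of the 9250 baud noted in the comment.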
 

bpowell

Senior Member
For the naysayers and those who refuse to consider that baud rate could be the problem here, I say run the code and test it yourself.
Please see the OP's post #6...the 20M2 is ABLE to receive the data just fine provided the OP is not using the #VAR function...further, since the ">" qualifier is processed, this means the 20M2 is able to receive and properly decode signals coming from the OpenLog. If the baud rate were out of spec enough to cause corruption, then the qualifier ">" would be just as corrupted as any other data...further, the 20M2 would not be able to receive the data as straight variables, which it is able to do.

Also, in post #6, the OP states he tried running at up to 32MHz, and this did not resolve the issue.

I'm not doubting the software-serial baud issue; but that doesn't seem to be the cause of this particular problem...at least as far as *I* understand it.
 

Jeremy Harris

Senior Member
Please see the OP's post #6...the 20M2 is ABLE to receive the data just fine provided the OP is not using the #VAR function...further, since the ">" qualifier is processed, this means the 20M2 is able to receive and properly decode signals coming from the OpenLog. If the baud rate were out of spec enough to cause corruption, then the qualifier ">" would be just as corrupted as any other data...further, the 20M2 would not be able to receive the data as straight variables, which it is able to do.

Also, in post #6, the OP states he tried running at up to 32MHz, and this did not resolve the issue.

I'm not doubting the software-serial baud issue; but that doesn't seem to be the cause of this particular problem...at least as far as *I* understand it.
Spot on. I did check at 16MHz as well, just in case it was a timing error that only occurred at some clock rates, but found the same problem.

There's no issue at all with the serial data reception; it seems to be something specific to the way that the # qualifier works on receive.

I decided I'd spent enough time trying to get this to work, when all I wanted to do was set the RTC, so I've switched to an alternative system for setting the RTC, using MSF date/time data, which works very well. This will get built into a box and used to measure and record air quality around the house, to assess the effectiveness of ventilation methods.

[Attachment: Prototype CO2 meter and logger.JPG]
 

Technical

Technical Support
Staff member
Goeytex,
We have just done almost your exact experiment from post #17. It works fine for our 20M2 at 8MHz.
Hippy has also just done it. He's working from home today, so a completely separate setup / test bed.

We used
SETFREQ M8 / HSERSETUP B9600_8 for 08M2 sender.

Brand new chips taken from the shelf stock,
08M2 version 4.A
20M2 version 8.A
Latest version of PE5.

As has been mentioned before, we simply do not agree with your 20M2 results. Your quoted 'centre baud rates' simply do not match our results.

Our 20M2 results for the 4 settings for serin are:

@4800_4 (same as 9600_8)
Bauds received 4497-4990
Error -1.17% (acceptable)

@2400_4 (same as 9600_16)
Bauds received 2289-2547
Error +0.77% (acceptable)

@1200_4
Bauds received 1136-1273
Error +0.41% (acceptable)

@600_4
Bauds received 570-639
Error +0.82% (acceptable)

You are correct in that 9600_16 is more accurate than 9600_8. However both are acceptable.

Our only conclusion is that the 20M2 chip you are testing with is faulty.
Is this with the new 20M2 chips we sent you a few months back when we previously suggested your chips may be faulty?
 
Last edited:

Goeytex

Senior Member
Please see the OP's post #6...the 20M2 is ABLE to receive the data just fine provided the OP is not using the #VAR function...
This is because the range of serin 9600_8 is reduced on the high side when using #variable vs variable. It goes down to about 9500 vs 9630. So if the data is sent at precisely 9600 it will not work with #variable, yet if the data is sent at 9300 it works fine.

Further, since the ">" qualifier is processed, this means the 20M2 is able to receive and properly decode signals coming from the OpenLog. If the baud rate were out of spec enough to cause corruption, then the qualifier ">" would be just as corrupted as any other data...further, the 20M2 would not be able to receive the data as straight variables, which it is able to do.
No, the fact that the qualifier is correctly processed only means that ONE BYTE was processed OK, not that the serin baud rate is good and that multiple back-to-back data bytes that follow will be processed correctly.

So a single byte qualifier is processed OK yet subsequent multiple back-to-back bytes failed? This is typical of what happens when the sending data rate is at the edge of the serin range. A single byte can be received OK, yet two back-to-back bytes cannot be. Usually, when at the edge, the first byte is good, but the next byte and subsequent bytes fail. 9600 baud is right at the upper edge of what serin 9600_8 will reliably receive. Your conclusion also ignores the fact that when the sending baud rate is within the tested range of serin, it works perfectly at 9600_8.

If the 20M2 Serin 9600_8 firmware had the same baud rate range as that on the 14M2 , we would not be having this conversation.
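
A back-of-envelope way to see why back-to-back bytes are the stress case (a sketch of the mechanism, not a measurement): each frame re-synchronises on its own start bit, so the real constraint is the turnaround between one stop bit and the next start bit.

$$t_{\text{bit}} = \frac{1}{9600} \approx 104\ \mu\text{s}, \qquad t_{\text{frame}} = 10\,t_{\text{bit}} \approx 1.042\ \text{ms}$$

If the sender runs a given percentage fast, each frame ends roughly 10 µs earlier per percent; a bit-banged receiver that needs tens of microseconds after the stop bit to store the byte and re-arm start-bit detection will then miss the next start edge, while a lone byte, or one followed by extra stop bits, is unaffected.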
 

Technical

Technical Support
Staff member
But the whole reason we are making this case so strongly is that your 20M2 results simply do not match ours. We've tested with, quite literally, dozens of 20M2 chips, and have never duplicated your results. See our detailed results above in post #21; they are not the same as yours.
When we get it wrong, we put our hands up and admit it, as with the 20X2.
However your 20M2 results simply do not match anything we have ever seen, hence our assumption that the parts you are using are faulty.
 

Goeytex

Senior Member
@Technical

My parts are not faulty. They are the brand new ones that were sent.

However your test is NOT "almost exactly the same" as mine and does not reflect real world conditions or what people actually do with serial comms. Your setup only tests whether or not one single byte can be received. I get similar results when receiving one single byte. But testing only one byte is not a meaningful test as it hides the issue.

Unlike serout, serin CANNOT be properly, accurately (or fairly) tested by receiving only one byte of data. A proper serial comms test will receive a stream of data (more than 1 byte) while the data is sent at a true 8N1.

Below are more meaningful test results of the 20M2 with serin N9600_8.

Receiving 1 byte only
serin C.2, N9600_8, b1
max 9975
min 8988

Receiving 2 bytes
serin C.2, N9600_8, b1, b2
max 9828
min 8988

Receiving 3 bytes
serin C.2, N9600_8, b1, b2, b3
max 9615
min 8988

Receiving 5 bytes
serin C.2, N9600_8, b1, b2, b3, b4, b5
9569 max
8988 min

Receiving 8 bytes
serin C.2, N9600_8, b1, b2, b3, b4, b5, b6, b7, b8
9569 max
8988 min

Receiving 16 bytes
serin C.2, N9600_8, b1, b2, b3, b4, b5, b6, b7, b8, b9, b10, b11, b12, b13, b14, b15, b16
9569 max
8988 min

More than 16 bytes
9569 max
8988 min

Note that from 5 bytes and beyond, serin 9600_8 works OK from 8988 to 9569 baud. 9569 is the absolute max with 5 bytes and beyond, which is of course not good. It means failure when the data is sent at a precise 9600 8N1. The first byte and possibly the second will be received OK, but one or more of the subsequent bytes will fail.
 
Last edited:

hippy

Technical Support
Staff member
However your test is NOT "almost exactly the same" as mine and does not reflect real world conditions or what people actually do with serial comms. Your setup only tests whether or not one single byte can be received. I get similar results when receiving one single byte. But testing only one byte is not a meaningful test as it hides the issue.
As Technical wrote ...

We used
SETFREQ M8 / HSERSETUP B9600_8 for 08M2 sender.
Those were the only changes made to your 08M2 code, though I had to change the #COM port number.

The test being used is -

Code:
'======================================================
'  USING 08M2 HSEROUT TO SIMULATE TRANSMITTING DEVICE
'=======================================================

#Picaxe 08M2
#No_Data
#com 1
#terminal off
Setfreq M8
Pause 100

symbol sp = 32 'Space

b0 = 123
b1 = 245
b2 = 77
b3 = 88
b4 = 99

MAIN:

'Select one
'===============================================================
HSERSETUP B9600_8,%10     'BAUD RATE = 9600 8N1
'===============================================================
pause 100
 
	do
		hserout 0,(">")
		pause 200
		hserout 0,(#b0,sp,#b1,sp,#b2,sp,#b3,sp,#b4)
		pause 4000
  
	loop
The 20M2 test program was exactly as in post #17; I didn't even need to change the #COM port number.
 

Technical

Technical Support
Staff member
But testing only one byte is not a meaningful test as it hides the issue.
We don't test with one byte. We test by analysing the assembler code that we have written, as well as practical tests. We know *exactly* what goes on inside the chip. We know it line by line. We know exactly what goes on behind the scenes, to the exact microsecond, on each of the 4 time settings. Hence we don't intend to waste any more time in this discussion as we are just going around in circles. We are happy with the performance of our product.
 

Goeytex

Senior Member
...we don't intend to waste any more time in this discussion as we are just going around in circles. We are happy with the performance of our product.
Ok then I guess that is settled. And it is clear that Picaxe users can never expect the 4800_4, 9600_8, etc. issue to ever be addressed in firmware. That's too bad, because the Picaxe is a fine product and Rev-Ed could do much better with this problem if they wanted to.
 

Jeremy Harris

Senior Member
Ok then I guess that is settled. And it is clear that Picaxe users can never expect the 4800_4, 9600_8, etc. issue to ever be addressed in firmware. That's too bad, because the Picaxe is a fine product and Rev-Ed could do much better with this problem if they wanted to.

FWIW, I remain wholly unconvinced there is a real problem here. I didn't do a lot of testing, but did replicate the tests that you did, and those that Technical and hippy have repeated. I didn't have a single serial data problem, so, like Technical and hippy, I'm inclined to the view that there isn't a practical problem here at all. The generally accepted baud rate tolerance is +/-2%, and it seems that the majority, perhaps all, Picaxe chips are inside these limits.

Given that there are now three sets of tests that don't agree with the results you've obtained, I'm reasonably convinced that baud rate variance is not the root cause of the # issue that I was having trouble with. There may well be issues with latency, related to the processing overhead associated with all software serial ports, but I really don't see that there is a generic problem with baud rates for all practical purposes.

If I have time tomorrow I may sit down and do some more definitive testing, using a non-Picaxe data source with a known baud rate accuracy, to try and see if I can bottom out the odd results that you've been getting.
 

Goeytex

Senior Member
FWIW, I remain wholly unconvinced there is a real problem here.
And I remain wholly convinced that there is. I suspect that there could be as many as three problems related to your initial post: the baud rate of serin, latency, and the quality of the data sent by the OpenLog device. Sometimes we look at things in black and white, as if a problem could only be caused by one thing or another, and by doing so fail to consider all possibilities.

If I have time tomorrow I may sit down and do some more definitive testing, using a non-Picaxe data source with a known baud rate accuracy, to try and see if I can bottom out the odd results that you've been getting.
That's a good idea and I hope you do. I encourage you to be objective and thorough. And remember, my tests and conclusions are based upon the device sending the data at a true 8N1, meaning there is no extra space added between bytes. Most Picaxe chips do not send serial data at a true 8N1; extra space is added to compensate for potential latency problems. The only way to know whether the data is sent at a true 8N1 is to scope/analyse the data signal.

Hippy's results are consistent with the data being sent at 8N2 or 8N3, which, in practical terms, is no different from sending a single byte.
 

Paix

Senior Member
I well understand what Goeytex is saying, and because of my background I think of serial communications as if it were coming directly from a paper tape transmitter at machine speed, not adjusted by extra padding between characters.

The teleprinting/Teletype equipment was set up using a TDMS (Telegraph Distortion Measuring Set) which also ascertained the range of each individual receiver.

It's a way of thinking, and whilst you can use various baud rates and claim success, if you have to do it with extra spacing then it's really just kiddology. The answer as I see it is that a serial receiver is just that. If you try to get a teleprinter to make tea as well as perform its primary function, then there is going to be a degree of disappointment along the way.
 

Armp

Senior Member
...we don't intend to waste any more time in this discussion as we are just going around in circles. We are happy with the performance of our product.
Ok then I guess that is settled. And it is clear that Picaxe users can never expect the 4800_4, 9600_8, etc. issue to ever be addressed in firmware. That's too bad, because the Picaxe is a fine product and Rev-Ed could do much better with this problem if they wanted to.
Unfortunately, Goey, since PICAXE does not actually provide ANY specification for Transmit data rates, Receive operating range or Bit Error Rate, I guess they get to ignore the whole issue..... Bottom line: it works, but the BER is unusable for many applications.
 

Goeytex

Senior Member
Unfortunately, Goey, since PICAXE does not actually provide ANY specification for Transmit data rates, Receive operating range or Bit Error Rate, I guess they get to ignore the whole issue..... Bottom line: it works, but the BER is unusable for many applications.
Hi Armp,

The only thing I have seen is that they are happy with +/-6% with serout. If we assume that same +/-6% also applies to software serin then, for example, serin should handle serial data at 9600_8 (8N1) from 9024 to 10176 baud. However, I cannot get serin to work error-free on any of three brand new 20M2's beyond ~9615 baud when multiple consecutive bytes (5 or more) of serial data are sent at a true 8N1.

Edit: Corrected the value above from 9570 to 9615.
 
Last edited:

Armp

Senior Member
The only thing I have seen is that they are happy with +/-6% with serout.
Where did they say that? To handle +/-6% at the receiver requires an adaptive receiver that can determine and track the actual incoming baud rate. I designed one many years ago for serial link test evaluation. Some families of chips do have autobaud capability; I don't know if the PICs do.
Edit: The 20M2 hardware does - but is it used?

26.3.1 AUTO-BAUD DETECT
The EUSART module supports automatic detection and calibration of the baud rate.
Anyway, I'm stuck in the house for a couple of days until the pollen levels go down, so maybe I'll take a look at the dozen or so 20M2s I've got lying around with my +/-3ppm logic analyzer.
 
Last edited:

Armp

Senior Member
That post links to an anonymous, simplistic, idealistic analysis, with quite the wrong answer! If that's what RevEd designed to, it's no wonder there are problems.

Anyone interested should look at the paper from Maxim: "Determining Clock Accuracy Requirements for UART Communications"
http://www.maximintegrated.com/app-notes/index.mvp/id/2141 which derives a realistic value of +/-3.3% TOTAL MISMATCH between the transmit and receive clocks. This 3.3% is split between the TX and RX - so if the receive clock is +/-1% as in the 20M2, the transmit clock can only be +/-2.3%.

BTW The 3.3% is for hardware implementations. A really good bit banged UART can get close, but most don't.

I'm sure we went through this before?
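
For anyone following along, the 'simplistic' analysis usually runs like this (my sketch of the textbook bound, not the Maxim derivation): with mid-bit sampling, the accumulated clock mismatch over the 9.5 bit periods from the start edge to the middle of the stop bit must stay under half a bit,

$$9.5\,|e_{\text{total}}| < 0.5\ \text{bit} \;\Longrightarrow\; |e_{\text{total}}| < 5.3\%$$

Maxim's +/-3.3% total is tighter because it also budgets for the granularity of start-edge detection and for signal rise/fall times.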
 
Last edited: