Q65-30: Effect of decode delay

Decoding of a Q65 signal can take a while if a rapid Q3 decode is not found. On a typical PC this delay may be a few seconds. The Tx sequence for Q65-30 starts 0.5 seconds after the :00 or :30 marker and continues for 26 seconds. That leaves about 4 seconds for decoding before the next transmission sequence starts. However, for EME there is typically an echo delay of around 2.5 seconds, which means the receiving station hears everything 2.5 seconds late. So the receiving station hears a transmission which starts about 3 seconds after the :00 or :30 marker and continues until about :28.5 or :58.5. This leaves only about 1.5 seconds before a response is sent.
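The timing arithmetic above can be sketched in a few lines. This is a rough check using the round numbers quoted here (0.5 s Tx offset, 26 s transmission, 2.5 s echo delay), not exact WSJT-X timings:

```python
# Receive-side timing for Q65-30 EME, using the article's round numbers.
TX_START_OFFSET = 0.5  # s after the :00/:30 marker when Tx begins
TX_DURATION = 26.0     # s, nominal length of a Q65-30 transmission
ECHO_DELAY = 2.5       # s, typical round-trip delay via the Moon
PERIOD = 30.0          # s, length of one Tx/Rx sequence

rx_start = TX_START_OFFSET + ECHO_DELAY  # heard ~3.0 s after the marker
rx_end = rx_start + TX_DURATION          # heard until roughly :29
next_tx = PERIOD + TX_START_OFFSET       # our reply starts at ~:30.5
margin = next_tx - rx_end                # time left to decode and respond

print(f"Rx ends near :{rx_end:.1f}, reply at :{next_tx:.1f}, "
      f"margin {margin:.1f} s")
```

With these round numbers the decode margin comes out to about 1.5 seconds, matching the figure above.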

Since decoding a weak signal using AP can take several seconds (especially on a slow PC), this means that if the DX station's signal takes too long to decode, you may start sending a reply before you have decoded the message and know which response to send! When WSJT-X 2.4 is in Q65 mode and "Auto Sequencing" is enabled, it will start sending a response based on the previous DX transmission, not the current one. This would be the appropriate response if the DX transmission had not decoded. If a decode then arrives and a different response is required, WSJT-X 2.4 will immediately switch to that response, but that may happen after the transmission has started.

The question then is, what effect does this have on the ability of the DX station to correctly decode this "mixed" message? Does the DX station see a reduction in decode sensitivity (or worse, get an incorrect decode)? Well, the answer to the second question is easy: it doesn't give an incorrect decode. The message is structured with error correction and checks which effectively prevent it from giving a false decode. If it decodes, the decode will be correct.

The first question is whether a message which changes 5 seconds into the transmission reduces decode probability. To test this I actually switched messages after 5 seconds, which is significantly longer than is likely to happen in practice, even with a slow PC. I generated three signals using Q65-30B. The signal strength was nominally around -25dB, just strong enough to give about a 50% probability of a single-period decode. These three signals were:

  1. "KA1GT DL3WDG JN68" - a standard response to a CQ
  2. "KA1GT DL3WDG -15" - a standard response with signal report
  3. 5 seconds of "KA1GT DL3WDG JN68" followed by "KA1GT DL3WDG -15" for the rest of the transmission

I then generated 20 different sets of these three files and measured the probability of decoding each signal in a single period (Q3 decode). Here are the results:

  1. The probability of "KA1GT DL3WDG JN68" decoding in a single period was 45%
  2. The probability of "KA1GT DL3WDG -15" decoding in a single period was 60%
  3. The probability of the mixed message decoding in a single period was 50%

These numbers are essentially the same within the margin of error expected for only 20 samples. The conclusion is that changing the message 5 seconds into a transmission has negligible effect on decode sensitivity. There were also some cases where messages (2) and (3) averaged together to produce a decode, so it seems that the mixed message (#3, which is mostly the same symbols as #2) can be averaged with the full message (#2) to produce an average that decodes as a Q32.
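To see why 45%, 60% and 50% count as "the same" with only 20 trials each, here is a quick sketch of the one-sigma binomial standard error for these sample sizes (my own check, not part of the original test):

```python
import math

def binomial_se(p, n):
    """One-sigma standard error of an observed proportion p over n trials."""
    return math.sqrt(p * (1 - p) / n)

n = 20  # number of test file sets per message type
for label, p in [("JN68 alone", 0.45), ("-15 alone", 0.60), ("mixed", 0.50)]:
    se = binomial_se(p, n)
    print(f"{label}: {p:.0%} +/- {se:.0%} (1 sigma)")
```

Each observed probability carries roughly an 11-percentage-point standard error, so the three results overlap comfortably within one sigma of each other.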

Conclusion

The decode delay (even with a slow PC) has little or no measurable effect on decode sensitivity (probability) for the Q65-30 mode on EME. This is the worst-case situation, since longer periods all allow more time between the end of the received message from one station and the start of the transmission from the other.

So the usability of the Q65-30 modes for EME does not seem to be compromised by decoding delays. Q65-30 is a viable and useful mode. If you see Tx start before your decode has completed, don't worry about it. WSJT-X 2.4 will still send the appropriate response and there will be no reduction in decode sensitivity when the message is decoded.

Addendum

You might think "of course it makes no difference; the messages both start with the same characters, so obviously it doesn't matter if you switch between them at the beginning", but that's not the way Q65 digital encoding (or JT65 encoding) works. Messages are not sent character by character in sequential order, as they would be in CW, RTTY or voice. For an explanation see, for example, Joe Taylor's paper "How Many Bits Are Copied in a JT65 Transmission?"

However... in this case, with a "Call Call ????" message, the first part of the digital code IS the same for two messages starting with the same two callsigns. In fact about the first 15 symbols are the same (that's about 4.5 seconds for Q65-30B), so in this case you don't actually lose anything by switching between messages after a few seconds! It's not because the characters in the callsigns correspond to the tones being sent on a 1-to-1 basis; it's because the sequence of symbols (tones) for each whole message happens to start with a symbol sequence related to the callsigns being sent.
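A rough check of that ~4.5 second figure, assuming Q65's standard 85-symbol frame (63 data symbols plus 22 sync symbols) spread over the nominal 26 s Q65-30 transmission quoted earlier:

```python
# Back-of-envelope check of the "first 15 symbols ~ 4.5 s" figure.
N_SYMBOLS = 85      # Q65 channel symbols per transmission (63 data + 22 sync)
TX_DURATION = 26.0  # s, nominal Q65-30 Tx length from the article

symbol_len = TX_DURATION / N_SYMBOLS  # ~0.31 s per symbol
overlap = 15 * symbol_len             # duration of the shared opening symbols
print(f"{symbol_len:.3f} s/symbol -> first 15 symbols span {overlap:.1f} s")
```

This lands at roughly 4.6 seconds, consistent with the "about 4.5 seconds" stated above.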

If you take a Q65-30 message and replace the first 5 seconds with noise (which is the same as going into Tx 5 seconds late) you do in fact lose a little decode sensitivity. I measured it as about a 1dB loss, from around -27dB at the 50% decode level to -26dB. Though you are removing the first symbols, which relate to the callsigns, the rest of the message still carries that information in another form, along with sync tones and symbols from the complex error-correction coding used to make sure the message can still be decoded even when symbols are lost to noise.
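For anyone wanting to reproduce this test, here is a minimal sketch of how such a file could be built from a recorded signal. The `blank_start` helper is hypothetical (not from WSJT-X); it assumes WSJT-X's 12000 Hz audio sample rate and matches the noise level to the RMS of the untouched remainder:

```python
import numpy as np

RATE = 12000  # Hz; WSJT-X .wav files are sampled at 12000 samples/s

def blank_start(signal, seconds=5.0, rate=RATE, seed=0):
    """Return a copy of `signal` with its first `seconds` replaced by
    Gaussian noise matched to the RMS level of the remaining audio."""
    rng = np.random.default_rng(seed)
    out = np.asarray(signal, dtype=float).copy()
    n = int(seconds * rate)
    rms = np.sqrt(np.mean(out[n:] ** 2))  # noise level from the signal tail
    out[:n] = rng.normal(0.0, rms, n)
    return out

# Demo on a synthetic 10 s tone (a stand-in for a real Q65-30 recording)
tone = np.sin(2 * np.pi * 1000 * np.arange(10 * RATE) / RATE)
mixed = blank_start(tone, 5.0)
```

The resulting file can then be fed back through the decoder, exactly as with the mixed-message test above.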