Originally Posted by P Smith
Do you have any comments to posts 316 and 341 ?
Your request was for a comparison of the streaminfo data record structures. This is what I perceive in combining the two interpretations.
typedef struct _K77_STREAMINFO_ENTRY {
    BYTE bVideoPIDs;      // count of video streams
    BYTE bAudioPIDs;      // count of audio streams
    WORD unk_02;          // unknown
    WORD wPCR;            // PCR, usually same as VPID1
    WORD unk_06;          // unknown, appears unused
    // video info by count = bVideoPIDs; should be max 2
    WORD wVPID1;          // video PID 1, duplicates wPCR
    WORD wVPID1type;      // = 0105 / 0002
    WORD wVPID2;          // second video PID, unused in practice
    WORD wVPID2type;      //
    // video langs part
    BYTE wnVLangs;        // number of langs = 1/2/3
    BYTE wnVLangs_rsv[3]; // = 0
    // first VLang item
    char cVLang[4];       // 3-char language code + trailing 0
    BYTE bVLangId;        // = 05
    BYTE bVLang_19;       // = 00
    BYTE wVLang_rsv[2];   // = 0
    // next VLang item ...
    BYTE b_1c[0x84];      //
} K77_STREAMINFO_ENTRY;
>The first byte in this structure is bVideoPIDs which corresponds to my byte 1 for number of video streams.
>The second byte in this structure is bAudioPIDs which corresponds to my byte 2 for number of audio streams.
>Next, a word unk_02 corresponds to my bytes 3-4, purpose unknown.
>Next is a word wPCR, which roughly corresponds to a combination of my byte 5 (the video stream ID number) and byte 6 (unknown, and apparently unused).
>Next, a word unk_06 corresponds to my bytes 7-8, which are unknown and appear unused.
>Next is a word wVPID1, which duplicates wPCR and corresponds to a combination of my byte 9 (the video stream ID number) and byte 10 (unknown, and apparently unused) - also duplicating bytes 5-6.
>Next is a word wVPID1type (values 0105 or 0002). This differs from my interpretation of bytes 11-12 as the audio channel type, where 2 indicates stereo and 5 indicates 5.1 surround sound. There is a correspondence with the audio channel during recording, but it is not perfect and appears to depend on multiple extraneous factors.
>Next is a pair of words, wVPID2 and wVPID2type. This differs from my interpretation of bytes 13-16 as unknown. Present TV broadcasts are allowed only one video stream per sub-channel, so I am not convinced this is a second video stream information area. Of course, it could be a placeholder for such information in the future.
>Your next section is video languages, which corresponds to the closed caption fields. They are a repeating section. You appear to allow for 3 repeats; I am assuming 4 repeats due to the DVR+ having 4 closed caption options.
>First is the byte wnVLangs, which corresponds to my byte 17, the number of closed caption streams in use.
>Second is three bytes wnVLangs_rsv, which correspond to my 3 unknown bytes.
>Third is a 4 byte character array cVLang, which corresponds to my 3 byte closed caption language code plus an additional byte that is always 0. Indeed, these language codes may be null terminated strings.
>Fourth is the byte bVLangId, which corresponds to my byte 25 unknown item. If you have an idea as to the byte's purpose, please let me know. I have seen values of 4, 5 and 9, but predominantly 5.
>Fifth is the byte bVLang_19, which corresponds to my byte 26, unknown but with a value of 0. Do you have a function in mind for this byte since it is named, or is it a placeholder to reach a word boundary?
>Sixth is a 2 byte array wVLang_rsv, which corresponds to my bytes 27-28, unknown but with values of 0.
>At this point you skip down to the audio section of the record with a notation of "next VLang item", whereas I repeat the closed caption service section a total of 4 times.
typedef struct _K77_STREAMINFO_APID_ENTRY {
    WORD  wAPID;         // audio stream PID
    WORD  wAPIDtype;     // = 010e
    DWORD rsv;           // = 0
    char  cALangID[4];   // "und"/"eng"/"esl"/"spa"/"fre"/"chi"...
    BYTE  bLangID;       // = 04
    BYTE  bALang_rsv[3]; // = 0
} K77_STREAMINFO_APID_ENTRY;
This is your audio information record entry, creating an array of 18 elements. I have a similar entry but allow only 2 repeats, not having yet found a source with three or more audio streams to check how many audio elements the DVR+ will respond to. This is an area where I may try some bit manipulation of the streaminfo file to see what happens.
>First word in this structure is wAPID, which corresponds to a combination of my byte 161 (audio 1 stream ID number) and byte 162 (unknown, and apparently unused).
>Second word is wAPIDtype, which corresponds to my bytes 163-164, an unknown item.
>Third is a double word rsv, which corresponds to my bytes 165-168, an unknown item but with values of 0.
>Fourth is a 4 byte character array cALangID, which corresponds to my 3 byte audio 1 language code plus an additional byte that is always 0. Once again, these language codes may be null terminated strings.
>Fifth is a byte bLangID, which corresponds to my byte 173, an unknown item but with a value of 4.
>Sixth is a 3 element byte array bALang_rsv, which corresponds to bytes 174-176, an unknown item but with values of 0.
>At this point you continue to repeat the array, whereas I give a second audio entry only and leave the rest empty.
Overall, the records are very similar except for the number of repeats of the video, closed caption and audio entry structures. I am trying to leave mine with the minimum number of repeats as experienced from the DVR+.
One real difference is in the utilization of wVPID1type within the video entry. In the second referenced post you cited the example of a streaminfo file with elements 2 and 7 having values of 51 instead of the usual 2, and suggested (unsuccessfully) that they could be codes for 720p, 1080i, etc. My interpretation would be the audio channel being stereo vs. surround sound. Once again, this value is not dependable, with history, menu settings, the DVR+ live record, and a "feature" of at times grabbing the live record and adding it to the recording all playing a part.

My interpretation would not provide a clean sectioning of the streaminfo file. However, I am leaving the next bytes open where you are repeating a second video information area. I believe that FCC/television rules stipulate only one video stream per sub-channel, so leaving the area open allows for other uses of the space.

While there is a correlation between the audio channel and this entry, I am not confident that my interpretation is correct. I have never seen a value of 5.1 for any sub-channel that broadcasts only in stereo (oldies stations), but a jumble of 5.1 and 2 for multi audio stream setups. Right now I can find no utilization for this data anyway.
One area that we do not have a handle on is that data streams are allowed to be muxed into channels. I do not know of any being done in actual practice, but some of the record area could be reserved for this.
One area needing attention is the audio stream area, where multiple values are being stored for unknown purposes. I think one of these may be a code for AC3-style audio, since I believe differing audio types are allowed by the FCC to be broadcast. I am still looking for a good reference to what the codes for these audio types would be, to see if any match.
The video language/closed caption area also needs some attention. However, looking at my information, I think there may be a number of bugs in the DVR+ software in this area. The one thing that might help is a one minute recording of CW Supergirl, which showed two records in a previous post. My attempts at duplicating this have been thwarted by the local station usurping the feeds in an emergency, and by a second attempt where no closed caption information was recorded in the streaminfo file at all. My older version of the DVR+ software may have a bug that failed when two services were available, and the newer version that made that record may have been fixed. I am simply afraid to update, since my system is working and I have read posts where the updates introduced new issues that I would like to avoid.

It also appears that the DVR+ may be using a second source for this information, with closed captions being displayed without a record; I need to look at this further. It is also possible that the DVR+ is merely recording the cc information as a log, and is actually blindly requesting the captioning from the video stream using a universally defined stream code.
I am going to see if I can perform some bit manipulations to stress test the streaminfo file to gather more information. But this may take a while due to time constraints.